This does not work:
CCSprite *testscale=[CCSprite spriteWithSpriteFrame:starFrame];
testscale.scale=0.5;
float starWidth=testscale.contentSizeInPixels.width;
CCLOG(@"contentpixels: %f contentsize: %f", starWidth, testscale.contentSize.width);
The two outputs in CCLOG both show the original pixel size of the sprite, not the size after scaling.
Is there a way to get it without doing this?...
float displayWidth=starWidth*testscale.scale;
Use the boundingBox property of CCNode:
[testscale boundingBox].size.width
[testscale boundingBox].size.height
This should give you the width and height you want, taking into account any transformation (scaling, rotation) you have made to the sprite. For example, a sprite whose contentSize.width is 100 will report a boundingBox width of 50 after its scale is set to 0.5.
I want to rotate a QGraphicsPixmapItem around a point according to the mouse position.
So I tried this:
void Game::mouseMoveEvent(QMouseEvent* e){
    setMouseTracking(true);
    QPoint midPos((sceneRect().width() / 2), 0), currPos;
    currPos = QPoint(mapToScene(e->pos()).x(), mapToScene(e->pos()).y());
    QPoint itemPos((midPos.x() - cannon->scenePos().x()), (midPos.y() - cannon->scenePos().y()));
    double angle = atan2(currPos.y(), midPos.x()) - atan2(midPos.y(), currPos.x());
    cannon->setTransformOriginPoint(itemPos);
    cannon->setRotation(angle);
}
But the pixmap only moves by a few pixels.
I want a result like this:
Besides the mixup of degrees and radians that @rafix07 pointed out, there is a bug in the angle calculation. You basically need the angle of the line from midPos to currPos, which you calculate by
double angle = atan2(currPos.y() - midPos.y(), currPos.x() - midPos.x());
Additionally the calculation of the transformation origin assumes the wrong coordinate system. The origin must be given in the coordinate system of the item in question (see QGraphicsItem::setTransformOriginPoint), not in scene coordinates. Since you want to rotate around the center of that item it would just be:
QPointF itemPos(cannon->boundingRect().center());
Then there is the question whether midPos is actually the point highlighted in your image in the middle of the canon. The y-coordinate is set to 0 which would normally be the edge of the screen, but your coordinate system may be different.
I would assume the itemPos calculated above is just the right point, you only need to map it to scene coordinates (cannon->mapToScene(itemPos)).
Lastly, I would strongly advise against rounding scene coordinates (which are doubles) to ints, as the code does by forcing them into QPoints instead of QPointFs. Just use QPointF whenever you are dealing with scene coordinates.
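Putting those pieces together, a corrected handler could look roughly like this (an untested sketch; note that setRotation expects degrees while atan2 returns radians, hence the conversion):

#include <QtMath> // qRadiansToDegrees
#include <cmath>  // std::atan2

void Game::mouseMoveEvent(QMouseEvent* e)
{
    // setMouseTracking(true) belongs in the constructor, not in every event.
    QPointF currPos = mapToScene(e->pos());

    // Rotate around the item's own center, given in item coordinates.
    QPointF itemPos = cannon->boundingRect().center();
    cannon->setTransformOriginPoint(itemPos);

    // The same point mapped to scene coordinates is the pivot to aim around.
    QPointF midPos = cannon->mapToScene(itemPos);

    // Angle of the line from midPos to currPos, converted to degrees.
    double angle = std::atan2(currPos.y() - midPos.y(), currPos.x() - midPos.x());
    cannon->setRotation(qRadiansToDegrees(angle));
}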
I am trying to implement logic where, on mouse click, a shot is fired at an object. To do so, I did the following:
I first considered the .obj file of my model and found the region (list of coordinates) that the shot works on (a particular weak point of the body).
I then considered the smallest and largest x, y and z values present in the file for that particular region (xmin,ymin,zmin and xmax,ymax,zmax).
To figure out whether the shot has landed on the weak point, I assumed that a shot lands on the weak point if the coordinates of the shot lie between (xmin,ymin,zmin) and (xmax,ymax,zmax).
I assumed the coordinates from the .obj file to be the actual coordinates of the model, since the assimp code I have directly loads in the coordinates of the model. Considering (xmin,ymin,zmin) and (xmax,ymax,zmax), I converted the coordinates to window coordinates via gluProject().
I then considered the current cursor position and checked if the cursor position lies between (xmin,ymin,zmin) and (xmax,ymax,zmax).
The problems I now face are:
The object coordinates provided in the .obj file range between -4 to 4, which then lie around 1.0 after gluProject(), whereas the cursor position lies between (0,0) and (1280,720).
After gluProject(), (xmin,ymin) and (xmax,ymax) are either (0,1) or (1,0) whereas the zmin and zmax values seem fine.
How can I get my logic working?
Here is the code:
// Call shader to draw and acquire necessary information for gluProject()
modelShader.use();
modelShader.setMat4("projection", projection);
modelShader.setMat4("view", view);
glm::mat4 model_dragon;
double time=glfwGetTime();
model_dragon=glm::translate(model_dragon, glm::vec3(cos((360.0-time)/2.0)*60.0,cos(((360.0-time)/2.0))*(-2.5),sin((360-time)/1.0)*60.0));
model_dragon=glm::rotate(model_dragon,(float)(glm::radians(30.0)),glm::vec3(0.0,0.0,1.0));
model_dragon=glm::scale(model_dragon,glm::vec3(1.4,1.4,1.4));
modelShader.setMat4("model", model_dragon);
collision_model=model_dragon; collision_view=view; collision_proj=projection; // so that I can provide the model, view and projection required for gluProject()
ourModel.Draw(modelShader);
Mouse button callback
// Note: dragon_min and dragon_max variables hold the constant position of the min and max coordinates.
void mouse_button_callback(GLFWwindow* window, int button, int action, int mods){
    if(button==GLFW_MOUSE_BUTTON_LEFT && action==GLFW_PRESS){
        Mix_PlayChannel( -1, shot, 0 ); // Play sound
        GLdouble x,y,xmin,ymin,zmin,xmax,ymax,zmax,dmodel[16],dproj[16];
        GLint dview[16];
        float *model = (float*)glm::value_ptr(collision_model);
        float *proj = (float*)glm::value_ptr(collision_proj);
        float *view = (float*)glm::value_ptr(collision_view);
        for (int i = 0; i < 16; ++i){dmodel[i]=model[i];dproj[i]=proj[i];dview[i]=(int)view[i];} // Convert mat4 to double array
        glfwGetCursorPos(window,&x,&y);
        gluProject(dragon_min_x,dragon_min_y,dragon_min_z,dmodel,dproj,dview,&xmin,&ymin,&zmin);
        gluProject(dragon_max_x,dragon_max_y,dragon_max_z,dmodel,dproj,dview,&xmax,&ymax,&zmax);
        if((x>=xmin && x<=xmax) && (y>=ymin && y<=ymax)){printf("Hit\n");defense--;}
The .obj coordinates have values such as:
0.032046 1.533727 4.398055
You are confusing the parameters of gluProject, especially the view parameter. This parameter should contain 4 integers which describe the viewport (x,y,width,height) and not the view matrix.
gluProject (and a lot of other glu functions) are tailored towards the fixed function pipeline and their matrix stacks. Due to this, you have to pass the following information:
model: The modelview matrix, as returned by glGetDoublev(GL_MODELVIEW_MATRIX, ...).
proj: The projection matrix, as returned by glGetDoublev(GL_PROJECTION_MATRIX, ...).
view: The current viewport, as returned by glGetIntegerv(GL_VIEWPORT, ...).
As you see, the view matrix is packed together with the model matrix and view contains the viewport.
I'd strongly advise not to use glu functions at all when working with modern OpenGL. Especially when the matrices are already stored in glm, it would be better to use glm::project.
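For illustration, a sketch of what the projection could look like with glm, assuming the stored collision_* matrices from the question and a 1280x720 framebuffer:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::project

// The viewport is (x, y, width, height) in pixels, not a matrix.
glm::vec4 viewport(0.0f, 0.0f, 1280.0f, 720.0f);

// glm::project's "model" argument is the combined modelview matrix.
glm::vec3 win_min = glm::project(
    glm::vec3(dragon_min_x, dragon_min_y, dragon_min_z),
    collision_view * collision_model, collision_proj, viewport);
glm::vec3 win_max = glm::project(
    glm::vec3(dragon_max_x, dragon_max_y, dragon_max_z),
    collision_view * collision_model, collision_proj, viewport);

Also note that glfwGetCursorPos measures y from the top of the window, while the window coordinates returned by glm::project grow from the bottom, so one of the two y values has to be flipped (e.g. y = 720.0 - y).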
Note 1: Converting a floating-point matrix to an integer matrix by casting each element almost never results in anything meaningful.
Note 2: When projecting a bounding rectangle to screen space, the result will in general not be a rectangle anymore. During projection, angles are not preserved, so the result is a general four-cornered polygon, not a rectangle. The same goes for bounding boxes: you can't even guarantee that the projected box is contained in the screen-space rectangle defined by projecting [x_min, y_min, z_min] and [x_max, y_max, z_max].
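If a conservative hit test is acceptable, one option is to project all eight corners of the box and take the component-wise min/max of the results (a sketch, reusing the viewport and matrices from the previous snippet):

#include <limits>

glm::vec2 lo( std::numeric_limits<float>::max());
glm::vec2 hi(-std::numeric_limits<float>::max());
for (int i = 0; i < 8; ++i) {
    // Enumerate the 8 corners of the axis-aligned bounding box.
    glm::vec3 corner((i & 1) ? dragon_max_x : dragon_min_x,
                     (i & 2) ? dragon_max_y : dragon_min_y,
                     (i & 4) ? dragon_max_z : dragon_min_z);
    glm::vec3 win = glm::project(corner, collision_view * collision_model,
                                 collision_proj, viewport);
    lo = glm::min(lo, glm::vec2(win));
    hi = glm::max(hi, glm::vec2(win));
}
// (lo, hi) now bounds the projected box in window space; test the cursor
// against this rectangle instead of only two projected corners.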
What is the best technique to scale a sprite to an exact size? The scale property is a multiplier, but if you want a sprite to be exactly X pixels wide, is there a simple technique? Or would it simply require using the desired size and the sprite's actual contentSize to calculate the necessary scale?
I believe this works:
-(void)resizeSprite:(CCSprite*)sprite toWidth:(float)width toHeight:(float)height {
    sprite.scaleX = width / sprite.contentSize.width;
    sprite.scaleY = height / sprite.contentSize.height;
}
Put it in your game, and use it like this:
[self resizeSprite:mySprite toWidth:350 toHeight:400];
You can use these two properties: scaleX and scaleY.
For example, in an HP bar:
hpBarSprite.scaleX = hpCurrent / hpMax;
I am using the D3DXSPRITE method to draw my map tiles to the screen. I just added a zoom function which zooms in when you hold the up arrow, but I noticed you can now see gaps between the tiles. Here are some screenshots:
normal size (32x32) per tile
zoomed in (you can see white gaps between the tiles)
zoomed out (even worse!)
Here's the code snippet with which I translate and scale the world.
D3DXMATRIX matScale, matPos;
D3DXMatrixScaling(&matScale, zoom_, zoom_, 0.0f);
D3DXMatrixTranslation(&matPos, xpos_, ypos_, 0.0f);
device_->SetTransform(D3DTS_WORLD, &(matPos * matScale));
And this is my drawing of the map (tiles are in a vector of vectors of tiles, and I haven't done culling yet):
LayerInfo *p_linfo = NULL;
RECT rect = {0};
D3DXVECTOR3 pos;
pos.x = 0.0f;
pos.y = 0.0f;
pos.z = 0.0f;
for (short y = 0; y < BottomTile(); ++y)
{
    for (short x = 0; x < RightTile(); ++x)
    {
        for (int i = 0; i < TILE_LAYER_COUNT; ++i)
        {
            p_linfo = tile_grid_[y][x].Layer(i);
            if (p_linfo->Visible())
            {
                p_linfo->GetTextureRect(&rect);
                sprite_batch->Draw(
                    p_engine_->GetTexture(p_linfo->texture_id),
                    &rect, NULL, &pos, 0xFFFFFFFF);
            }
        }
        pos.x += p_engine_->TileWidth();
    }
    pos.x = 0;
    pos.y += p_engine_->TileHeight();
}
Your texture indices are wrong. 0,0,32,32 is not the correct value; it should be 0,0,31,31. A zero-based index into your texture atlas of 256 pixels would yield values of 0 to 255, not 0 to 256, and a 32x32 texture should yield 0,0,31,31. In this case, the colour of the incorrect pixels depends on the colour of the next texture along the right and the bottom.
That's the problem of magnification and minification, I think. Your tiles should have an invisible border populated with part of the adjacent texture. Then the magnification and minification filters will use that border to calculate the colour of edge pixels rather than the default (white) colour.
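For instance, if the atlas were rebuilt so that every 32x32 tile carries a 1px border duplicated from its neighbors, the source rectangles might be computed like this (a sketch; the layout constants and the TileSourceRect helper are made up for illustration):

#include <windows.h>

// Each tile occupies 34x34 in the atlas: 32x32 of payload plus a 1px
// duplicated border on every side.
const int TILE = 32, GUTTER = 1, STRIDE = TILE + 2 * GUTTER;

RECT TileSourceRect(int col, int row)
{
    RECT r;
    r.left   = col * STRIDE + GUTTER; // skip the duplicated border
    r.top    = row * STRIDE + GUTTER;
    r.right  = r.left + TILE;
    r.bottom = r.top + TILE;
    return r;
}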
I also had a similar problem with texture mapping. What worked for me was changing the texture address mode in the sampler state description. The texture address mode controls what Direct3D does with texture coordinates outside of the [0.0f, 1.0f] range: I changed the ADDRESS_U, ADDRESS_V and ADDRESS_W members to D3D11_TEXTURE_ADDRESS_CLAMP, which basically clamps all out-of-range texture coordinates into the [0.0f, 1.0f] range.
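This answer refers to Direct3D 11; a minimal sketch of that sampler setup (the device and context pointers are assumed to exist):

#include <d3d11.h>

D3D11_SAMPLER_DESC desc = {};
desc.Filter   = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
desc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP; // clamp U into [0.0f, 1.0f]
desc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP; // clamp V into [0.0f, 1.0f]
desc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP; // clamp W into [0.0f, 1.0f]
desc.ComparisonFunc = D3D11_COMPARISON_NEVER;
desc.MaxLOD   = D3D11_FLOAT32_MAX;

ID3D11SamplerState* sampler = nullptr;
HRESULT hr = device->CreateSamplerState(&desc, &sampler);
if (SUCCEEDED(hr))
    context->PSSetSamplers(0, 1, &sampler); // bind for the pixel shader stage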
After a long time searching and testing other people's solutions, I found these are the most complete rules I've ever read:
pixel-perfect-2d from the official Unity website
Plus, from my own experience, I found that if a sprite's PPI is 72 (for example), you should try to use a higher PPI for that image (maybe 96 or more). It actually makes the sprite denser and leaves no room for white gaps to show up.
Welcome to the world of floating-point. Those gaps exist due to the imprecision of floating-point numbers.
You might be able to improve the situation by being really careful when doing your floating-point math, but those seams will be there unless you make one whole mesh out of your terrain.
It's the rasterizer that, given the view and projection matrices as well as the vertex positions, ends up slightly off. You may be able to improve on that, but I don't know how successful you'll be.
Instead of drawing separate quads, you can index only the visible vertices that make up your terrain and use texture tiling techniques to paint different tiles onto it, as sketched below. I believe that won't give you the ugly seam, because in that case there technically isn't one.
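As a rough illustration of that idea, here is a minimal sketch of a shared-vertex grid (the Vertex layout and BuildGrid helper are made up): adjacent quads reuse the same edge vertices, so the rasterizer cannot open a seam between them.

#include <vector>

struct Vertex { float x, y, u, v; };

void BuildGrid(int w, int h, float tile,
               std::vector<Vertex>& verts, std::vector<unsigned>& indices)
{
    // One shared vertex per grid corner: (w+1) x (h+1) of them.
    for (int y = 0; y <= h; ++y)
        for (int x = 0; x <= w; ++x)
            verts.push_back({ x * tile, y * tile,
                              float(x) / w, float(y) / h });

    // Two triangles per quad, indexing the shared corner vertices.
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            unsigned i = y * (w + 1) + x;
            indices.insert(indices.end(),
                { i, i + 1, i + w + 1,  i + 1, i + w + 2, i + w + 1 });
        }
}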
Once more, something relatively simple, but I'm confused as to what they want.
The method to find distance in a Cartesian coordinate system is
distance = sqrt[(x2-x1)^2 + (y2-y1)^2]
but how do I apply it here?
//Requires: testColor to be a valid Color
//Effects: returns the "distance" between the current Pixel's color and
// the passed color
// uses the standard method to calculate "distance"
// uses the same formula as finding distance on a
// Cartesian coordinate system
double colorDistance(Color testColor) const;
The Color class has these components defined:
int red, green, blue
Do I define something like oldRed, oldGreen and oldBlue and get the distance that way, with the passed color being red, green, blue?
http://pastebin.com/v9K30dc7
I'm guessing they want you to use:
distance = sqrt[(r2-r1)^2 + (g2-g1)^2 + (b2-b1)^2]
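A minimal sketch of how that could look, based on the declarations in the question (the Pixel class and its color member are assumed, since only the method signature was shown):

#include <cmath>

struct Color { int red, green, blue; };

struct Pixel {
    Color color;

    double colorDistance(Color testColor) const {
        double dr = color.red   - testColor.red;   // delta red
        double dg = color.green - testColor.green; // delta green
        double db = color.blue  - testColor.blue;  // delta blue
        return std::sqrt(dr * dr + dg * dg + db * db);
    }
};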
Break the color into its red, green and blue components, and use the same method, i.e. sqrt(sqr(delta red) + sqr(delta green) + sqr(delta blue)).
Note: this is not a really great method, as it doesn't allow for gamma or the more complicated case of human perception. Read http://en.wikipedia.org/wiki/Color_difference for more exotic methods.