Cocos2d-x setScaleX() - C++

So I was programming a game in cocos2d-x and I need one of the sprites to get wider for a certain amount of time, so I tried the setScaleX() method. The problem is that the content size of the sprite does not actually change, and since my collision system is based on the sprite's content size, collisions no longer work properly. Here is the code I use for scaling:
bar = Sprite::create( "Bar.png" );
CCLOG("Size: %f,%f.", bar->getContentSize().width, bar->getContentSize().height);
bar->setScaleX(1.5);
CCLOG("Size: %f,%f.", bar->getContentSize().width, bar->getContentSize().height);
The output is exactly the same in both cases. Is there any way of fixing this?

The content size represents the original texture size unless you change it explicitly with setContentSize().
You can either multiply the content size by the scale factor or use boundingBox().size to get the current size of a scaled sprite (as long as it is not rotated or skewed).
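For example, a minimal sketch of both options, using the bar sprite from the question:
bar->setScaleX(1.5f);
// Option 1: multiply the unscaled content size by the current scale factors.
float scaledW = bar->getContentSize().width * bar->getScaleX();
float scaledH = bar->getContentSize().height * bar->getScaleY();
// Option 2: the axis-aligned bounding box already includes the scale
// (valid as long as the sprite is not rotated or skewed).
cocos2d::Size boxSize = bar->getBoundingBox().size;
CCLOG("Scaled size: %f,%f (box: %f,%f).", scaledW, scaledH, boxSize.width, boxSize.height);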

cocos2d::Size size1 = cocos2d::Director::getInstance()->getWinSize();
float scaleX = size1.width / 768.0f;   // 768x1024 is the design resolution used here
float scaleY = size1.height / 1024.0f;
Sprite *sp1 = Sprite::create("01.png");
sp1->setScaleX(scaleX);
sp1->setScaleY(scaleY);
this->addChild(sp1);

Scale for Landscape:
cocos2d::Size size = cocos2d::Director::getInstance()->getWinSize();
float scaleX = size.width / 1024.0f;
float scaleY = size.height / 768.0f;
Scale for Portrait:
cocos2d::Size size = cocos2d::Director::getInstance()->getWinSize();
float scaleX = size.width / 768.0f;
float scaleY = size.height / 1024.0f;
Sprite *sp=Sprite::create("01.png");
sp->setScaleX(scaleX);
sp->setScaleY(scaleY);
this->addChild(sp);

Related

Pygame: slow performance using pygame.Surface and convert_alpha()

I'm developing a simple tile game that displays a grid image and paints it with successive layers of images. So I have:
list_of_image_tiles = { GRASS: pygame.image.load('/grass.png').convert_alpha(), TREES: pygame.image.load('/trees.png').convert_alpha(), etc}
Then later on I blit these:
DISPLAYSURF.blit(list_of_images[lists_of_stuff][TREES], (col*TILESIZE,row*TILESIZE))
DISPLAYSURF.blit(list_of_images[lists_of_stuff][GRASS], (col*TILESIZE,row*TILESIZE))
Note that for brevity I've not included a lot of code, but it does basically work, except that performance is painfully slow. If I comment out the DISPLAYSURF stuff, performance leaps forward, so I think I need a better way to do the DISPLAYSURF stuff, or possibly the pygame.image.load bits (is convert_alpha() the best way, bearing in mind I need the layered-image approach?)
I read that something called Psyco might help, but I'm not sure how to fit that in. Any ideas on how to improve the performance are most welcome.
There are a couple of things you can do.
1. Perform the "multi-layer" blit just once to an intermediate surface, then blit that surface to DISPLAYSURF every frame.
2. Identify the parts of the screen that need to be updated and use pygame.display.update(rectangle_list) instead of pygame.display.flip().
Edit to add example of 1.
Note: you didn't give much of your code, so I just fit this with how I do it.
# build up the level surface once when you enter a level.
level = Surface((LEVEL_WIDTH * TILESIZE, LEVEL_HIGHT * TILESIZE))
for row in range(LEVEL_HIGHT):
    for col in range(LEVEL_WIDTH):
        level.blit(list_of_images[lists_of_stuff][TREES], (col * TILESIZE, row * TILESIZE))
        level.blit(list_of_images[lists_of_stuff][GRASS], (col * TILESIZE, row * TILESIZE))
then in main loop during draw part
# blit only the part of the level that should be on the screen
# view is a Rect describing which tiles should be viewable
disp = DISPLAYSURF.get_rect()
level_area = Rect((view.left * TILESIZE, view.top * TILESIZE), disp.size)
DISPLAYSURF.blit(level, disp, area=level_area)
You should use a colorkey whenever you don't need per-pixel alpha. I just changed all the convert_alpha() calls in my code to plain convert() and set a color key for the fully opaque parts of the image. Performance increased tenfold!

High-DPI scaling of QQuickItem-derived class

I use Qt Quick Controls 2 together with a QQuickItem-derived class in my app. After I set the AA_EnableHighDpiScaling attribute, all Qt Quick Controls 2 components look correct on my smartphone, but the object of my custom class is scaled incorrectly. Here is the app without high-DPI scaling at minimum zoom (the way it is meant to work):
And here is the one with scaling at minimum zoom:
It seems that on the second screen the object is scaled too much, and I can see the square pixels of all textures that I draw with QPixmap or QImage. However, the images that I load from external memory and nodes like QSGGeometryNode look correct. Can I switch off scaling for just one particular QQuickItem? If not, what should I set to render it correctly?
Also, when I try to set opacity on a QQuickItem with a lot of QSGOpacityNodes in its scene graph node tree, I get a segmentation fault. What can cause this?
So I solved this problem by dividing the size of the QSGTexture by QQuickWindow::effectiveDevicePixelRatio() and also multiplying the size of the image from which the texture is created by the same ratio.
If you are drawing text onto a QImage, you should also multiply your font size by this ratio. The same applies to geometric shapes and QPixmap::scaled().
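A rough sketch of that idea, assuming a hypothetical MyItem that builds its texture from a QImage inside updatePaintNode(): the backing image is created at the logical size multiplied by the ratio, while the node keeps the logical size (i.e. the texture size divided by the ratio), so the item is not scaled a second time.
#include <QQuickItem>
#include <QQuickWindow>
#include <QSGSimpleTextureNode>
#include <QImage>
#include <QPainter>

QSGNode *MyItem::updatePaintNode(QSGNode *oldNode, UpdatePaintNodeData *)
{
    auto *node = static_cast<QSGSimpleTextureNode *>(oldNode);
    if (!node)
        node = new QSGSimpleTextureNode();

    const qreal dpr = window()->effectiveDevicePixelRatio();

    // Render the content at device resolution (logical size multiplied by the ratio)...
    QImage img(qRound(width() * dpr), qRound(height() * dpr),
               QImage::Format_ARGB32_Premultiplied);
    img.setDevicePixelRatio(dpr);
    img.fill(Qt::transparent);
    QPainter p(&img);
    p.drawText(QRectF(0, 0, width(), height()), Qt::AlignCenter, "crisp");
    p.end();

    // ...but keep the node's geometry in logical (device-independent) pixels,
    // i.e. the texture size divided by the device pixel ratio.
    node->setTexture(window()->createTextureFromImage(img));
    node->setOwnsTexture(true);
    node->setRect(0, 0, width(), height());
    return node;
}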

How to rotate an image around its centre in Qt QWidgets C++?

I am trying to rotate an image around its origin (center) in Qt using QWidgets in C++. I have experimented with a lot of things here, but no matter what I do, the image keeps rotating around some arbitrary position I have no clue about. Kindly help me out here. I am new to Qt.
void gaugeWithRedZoneImage::rotate()
{
    QPixmap pixmap(*gaugeMainScreen->pixmap());
    QMatrix rm;
    rm.translate(0, 0);
    rm.rotate(-360);
    pixmap = pixmap.transformed(rm);
    gaugeMainScreen->setPixmap(pixmap);

    /*QTransform rotate_disc;
    rotate_disc.translate(pixmap.width()/2.0 , pixmap.height()/2.0);
    rotate_disc.rotate(-60);
    rotate_disc.translate(-(pixmap.width()/2.0) , -(pixmap.height()/2.0));
    pixmap = pixmap.transformed(rotate_disc);
    gaugeMainScreen->setPixmap(pixmap);*/
}
From the documentation of QPixmap::transformed():
The transformation transform is internally adjusted to compensate for unwanted translation; i.e. the pixmap produced is the smallest pixmap that contains all the transformed points of the original pixmap.
This means that the method ensures no clipping takes place by expanding the canvas. No matter what your rotation center was, the automatic extension of the canvas will almost always result in a perceived shift.
Image examples might help to further diagnose the problem.
As ypnos said, your problem isn't the rotation center. When you rotate your image, its width and height will most likely change and no longer fit your container (gaugeMainScreen) dimensions.
You have some possibilities to overcome this problem. One of them is to set your container to scale its contents (you can use the method setScaledContents()). In this case, you have to keep the original image around and use it whenever you apply a rotation, otherwise your image will appear increasingly smaller.
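A minimal sketch of that approach, assuming originalPixmap is a member holding the untouched source image and gaugeMainScreen is the QLabel that displays it (rotateTo() is an illustrative name):
void gaugeWithRedZoneImage::rotateTo(qreal angleDegrees)
{
    // Always rotate a copy of the untouched original; rotating an already-rotated
    // pixmap degrades it and shrinks it a little more on every call.
    QTransform t;
    t.rotate(angleDegrees);
    QPixmap rotated = originalPixmap.transformed(t, Qt::SmoothTransformation);

    // transformed() grows the canvas to avoid clipping, so let the label scale
    // the result back into its fixed geometry (set once, e.g. in the constructor).
    gaugeMainScreen->setScaledContents(true);
    gaugeMainScreen->setPixmap(rotated);
}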

Cocos2d CCSprite Resize Image

I have 2 points A and B. The distance is 100 and my sprite image is 50. My question is: can I resize the sprite from the center of the image in order to keep the quality, and if it's possible, how can I do that? I tried this code, but it just scales the image width and looks awful.
-(void)resizeSprite:(CCSprite*)sprite toWidth:(float)width toHeight:(float)height {
    sprite.scaleX = width / sprite.contentSize.width;
    sprite.scaleY = height / sprite.contentSize.height;
}
Not sure if this is what you're after, but if you just want to scale the sprite to twice the size without artefacts, you can adjust the filtering parameters of the sprite's texture
[sprite.texture setAliasTexParameters];
Then just scale it to whatever you like
[sprite setScale: 2.f];
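For reference, a rough cocos2d-x (C++) equivalent of the same idea:
sprite->getTexture()->setAliasTexParameters();   // nearest-neighbour; use setAntiAliasTexParameters() for smooth (linear) filtering
sprite->setScale(2.0f);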

Drawing large numbers of pixels in OpenGL

I've been working on some sound processing code and now I'm doing some visualizations. I finished making a spectrogram, but the way I am drawing it is too slow.
I'm using OpenGL to do 2D drawing, which has made searching for help more difficult. Also I am very new to OpenGL, so I don't know the standard way things are done.
I am storing the r,g,b values for each pixel in a large matrix.
Each time I get a small sound segment, I process it and convert it to a column of pixels. Everything is shifted left by one pixel, and the new column is put at the end.
Each time I redraw, I am looping through setting the color and drawing each pixel individually, which seems like a horribly inefficient way to do this.
Is there a better way to do this? Is there some method for simply shifting a bunch of pixels over?
There are many ways to improve your drawing speed.
The simplest would be to allocate an RGB texture that you draw using a screen-aligned textured quad.
Each time you want to draw a new line, use glTexSubImage2D to upload the new column into the texture and then redraw the quad.
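A rough sketch of that approach in legacy (fixed-function) GL; tex_w, tex_h, col and column_rgb are placeholders for the spectrogram size, the index of the newest column and its RGB pixel data:
// One-time setup: allocate an empty RGB texture the size of the spectrogram.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tex_w, tex_h, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);

// Per sound segment: upload only the newest column at x = col (no shifting of old data).
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // the column is tightly packed RGB
glTexSubImage2D(GL_TEXTURE_2D, 0, col, 0, 1, tex_h, GL_RGB, GL_UNSIGNED_BYTE, column_rgb);

// Per frame: draw one screen-aligned quad; offsetting the texture coordinates by
// (col + 1) / tex_w makes the texture wrap, so the newest column lands at the right edge.
float u0 = (float)(col + 1) / tex_w;
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(u0,        0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(u0 + 1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(u0 + 1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
glTexCoord2f(u0,        1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();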
Are you perhaps passing a lot more data to the graphics card than you have pixels? This could happen if your FFT size is much larger than the height of the drawing area, or if the number of spectral lines is a lot more than its width. If so, the bottleneck could be passing too much data across the bus. Try reducing the number of spectral lines by either averaging them or peak-picking (taking the maximum in each bin for a set of consecutive lines).
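For example, a small sketch of the max-per-bin reduction (function and variable names are only illustrative):
#include <vector>
#include <algorithm>
#include <cstddef>

// Collapse the FFT magnitudes down to 'rows' values, keeping the maximum in each bin.
std::vector<float> reduce_bins(const std::vector<float> &mag, std::size_t rows)
{
    std::vector<float> out(rows, 0.0f);
    for (std::size_t i = 0; i < mag.size(); ++i) {
        std::size_t r = i * rows / mag.size();   // output bin this magnitude falls into
        out[r] = std::max(out[r], mag[i]);
    }
    return out;
}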
GL_POINTS, VBO, GL_STREAM_DRAW.
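A rough sketch of that suggestion, with interleaved x, y position and r, g, b colour per point; n_points and points are placeholders for your data:
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);

// Every frame: re-upload the point data (GL_STREAM_DRAW hints that it changes often)
// and draw the whole spectrogram as one batch of points.
glBufferData(GL_ARRAY_BUFFER, n_points * 5 * sizeof(float), points, GL_STREAM_DRAW);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(2, GL_FLOAT, 5 * sizeof(float), (const void *)0);
glColorPointer(3, GL_FLOAT, 5 * sizeof(float), (const void *)(2 * sizeof(float)));
glDrawArrays(GL_POINTS, 0, n_points);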
I know this is an old question, but...
Use a circular buffer to store the pixels, and then simply call glDrawPixels twice with the appropriate offsets. Something like this untested C:
#include <GL/gl.h>
#define SIZE_X 800
#define SIZE_Y 600
/* Circular buffer twice the screen width: new columns never overwrite visible ones. */
unsigned char pixels[SIZE_Y][SIZE_X*2][3];
int start = 0;  /* leftmost visible column */
void add_line(const unsigned char line[SIZE_Y][1][3]) {
    int i, j, coord = (start + SIZE_X) % (2*SIZE_X);
    for (i = 0; i < SIZE_Y; ++i)
        for (j = 0; j < 3; ++j)
            pixels[i][coord][j] = line[i][0][j];
    start = (start + 1) % (2*SIZE_X);
}
void draw(void) {
    /* The visible window is columns [start, start+SIZE_X) of the circular buffer. */
    int w = 2*SIZE_X - start;                       /* columns before the buffer wraps */
    if (w > SIZE_X) w = SIZE_X;
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 2*SIZE_X);  /* real row length of the buffer */
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, start);
    glWindowPos2i(0, 0);                            /* OpenGL 1.4: raster position in window coords */
    glDrawPixels(w, SIZE_Y, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    if (w < SIZE_X) {                               /* wrapped part starts again at column 0 */
        glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
        glWindowPos2i(w, 0);
        glDrawPixels(SIZE_X - w, SIZE_Y, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    }
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
}