Change png that is used by sprite - cocos2d-iphone

In cocos2d-x, how can I change the png that is used by a sprite?
The following works, however it seems a bit long-winded, and I was wondering if there is an alternative that prevents me from having to call new?
// create sprite with original png
m_pSpr = CCSprite::create( "1.png" );
m_pSpr->setPosition( ccp( 100, 100 ) );
this->addChild( m_pSpr );
// now change the png that is used by the sprite
// new image from png file
CCImage* img = new CCImage();
img->initWithImageFile( "2.png", CCImage::kFmtPng );
// new texture from image
CCTexture2D* tex = new CCTexture2D();
tex->initWithImage( img );
img->release();     // the texture keeps its own copy of the pixel data
tex->autorelease(); // setTexture() retains the texture, avoiding a leak
// finally change texture of sprite
m_pSpr->setTexture( tex );

Pack your sprites into a spritesheet, then use CCSprite's setDisplayFrame().
// make sure the spritesheet is cached
auto cacher = CCSpriteFrameCache::sharedSpriteFrameCache();
cacher->addSpriteFramesWithFile("Spritesheet.plist");
// create the sprite
m_pSpr = CCSprite::create( "1.png" );
// set its display frame to 2.png
CCSpriteFrame* frame = cacher->spriteFrameByName("2.png");
if( frame )
    m_pSpr->setDisplayFrame(frame);

You should not use the setTexture method like this for a single sprite. If you pack your sprites into atlases (a single texture, for example 2048x2048 pixels, holding many different frames, which takes less memory), this method will assign the whole huge texture to your sprite. Use setDisplayFrame instead.

You can, but what Morion said is correct: try to avoid the code below because it is expensive. Using TexturePacker and dealing with sprite frames is a good idea.
yourSprite->setTexture(CCTextureCache::sharedTextureCache()->addImage("2.png"));
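As a further note, if all frames live in one cached sheet you can skip the file-based create entirely and work with frame names from the start. A minimal sketch, assuming 1.png and 2.png are frames inside the Spritesheet.plist from the answer above:
// load the sheet once, e.g. at scene startup
CCSpriteFrameCache::sharedSpriteFrameCache()->addSpriteFramesWithFile( "Spritesheet.plist" );
// create the sprite straight from the cached frame...
m_pSpr = CCSprite::createWithSpriteFrameName( "1.png" );
m_pSpr->setPosition( ccp( 100, 100 ) );
this->addChild( m_pSpr );
// ...and later swap frames without creating any new textures
m_pSpr->setDisplayFrame( CCSpriteFrameCache::sharedSpriteFrameCache()->spriteFrameByName( "2.png" ) );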

Related

XCopyArea fails for X11 bitmap (Pixmap with depth 1)

I want to do texture atlasing with Xlib in X11. I created a pixmap by loading pixel data from an image file which contains all the sprites that will be used as textures. I can successfully copy part of the texture atlas pixmap (a single sprite) to another pixmap created as an off-screen drawable.
Here comes the problem. I want the texture copied to the destination pixmap to be partially transparent, so that no background rectangle appears behind each sprite. To do that I created a pixmap with depth equal to 1 for the whole texture atlas image (500 * 500).
pMaskData is the pixel data with depth 1.
Pixmap texAtlasMask = XCreatePixmapFromBitmapData(kTheDisplay, kRootWindow,
                                                  (char*)pMaskData, 500, 500, 1, 0, 1);
Then I create a clip_mask pixmap for a single sprite (the size of the sprite is 16*16), by first creating a depth-1 pixmap:
Pixmap clipMask = XCreatePixmap(kTheDisplay, kRootWindow, 16, 16, 1);
then use the following call to fill the content of clipMask:
// Error occurs here
// request code: 62:X_CopyArea
// error code: 8:BadMatch (invalid parameter attributes)
XCopyArea(kTheDisplay, texAtlasMask, clipMask, m_gc, 0, 0, 16, 16, 0, 0);
After that:
XSetClipMask(kTheDisplay, m_gc, clipMask);
// Copy source sprite to backing store pixmap
XSetClipOrigin(kTheDisplay, m_gc, destX, destY);
XCopyArea(kTheDisplay, m_symAtlas, m_backStore, m_gc, srcLeft, srcTop,
          width, height, destX, destY);
m_symAtlas is the texture atlas pixmap, and m_backStore is the destination pixmap we are drawing to.
As listed above, the error happens in the first call of XCopyArea. I tried XCopyPlane, but nothing changed.
I played around with XCopyArea and found that as long as the pixmap's depth is 32, XCopyArea works fine; it fails when the depth is not 32. Any idea what is wrong?
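For what it's worth, one common cause of exactly this BadMatch: XCopyArea requires the source and destination drawables to have the same depth, and the GC must have been created for a drawable of that depth. If m_gc was created on the window (depth 24 or 32), it cannot be used to copy between the depth-1 pixmaps. A minimal sketch under that assumption: create a GC on one of the depth-1 pixmaps and use it for the mask copy:
/* Assumption: m_gc belongs to the depth-24/32 window, so it mismatches
   the depth-1 drawables. A GC created on the 1-bit pixmap matches. */
GC maskGC = XCreateGC(kTheDisplay, clipMask, 0, NULL);
XCopyArea(kTheDisplay, texAtlasMask, clipMask, maskGC, 0, 0, 16, 16, 0, 0);
XFreeGC(kTheDisplay, maskGC);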

QT, C++: fast way to draw live image stream from camera on QGraphicsview

I'm writing a Qt GUI application in which a live stream from a connected camera is shown on a QGraphicsView. To that end, an OpenCV image is first converted to a QImage and then to a QPixmap. This is added to the QGraphicsScene of the QGraphicsView.
The bandwidth is not a problem; the cameras are connected via Ethernet or USB.
I am testing the performance with the Analyze tool built into Visual Studio 2012, and it shows that the conversion to the QPixmap is very slow, taking 60% of the computation time of displaying the image, so that I end up with 1 FPS or so. The images are 2560 by 1920 or even bigger. Scaling the cv::Ptr stream_image before converting it to a QImage improves the performance significantly, but I need all the detail in the image.
EDIT
Here is some code showing how I do the conversion:
cv::Ptr<IplImage> color_image;
// stream_image is a cv::Ptr<IplImage> and holds the current image from the camera
if (stream_image->nChannels != 3) {
    color_image = cvCreateImage(cvGetSize(stream_image), IPL_DEPTH_8U, 3);
    cv::Mat gr(stream_image);
    cv::Mat col(color_image);
    cv::cvtColor(gr, col, CV_GRAY2BGR);
}
else {
    color_image = stream_image;
}
QImage *tmp = new QImage(color_image->width, color_image->height, QImage::Format_RGB888);
memcpy(tmp->bits(), color_image->imageData, color_image->width * color_image->height * 3);
// update scene
m_pixmap = QPixmap::fromImage(*tmp); // this line takes the most time!!!
m_scene->clear();
QGraphicsPixmapItem *item = m_scene->addPixmap(m_pixmap);
m_scene->setSceneRect(0, 0, m_pixmap.width(), m_pixmap.height());
delete tmp;
m_ui->graphicsView->fitInView(m_scene->sceneRect(), Qt::KeepAspectRatio);
m_ui->graphicsView->update();
EDIT 2
I tested the method from Thomas's answer, but it is as slow as my method.
QPixmap m_pixmap = QPixmap::fromImage(QImage(reinterpret_cast<uchar const*>(color_image->imageData),
                                             color_image->width,
                                             color_image->height,
                                             QImage::Format_RGB888));
EDIT 3
I tried to incorporate Thomas's second suggestion:
color_image = cvCreateImage(cvGetSize(resized_image), IPL_DEPTH_32F, 3);
//[...]
QPixmap m_pixmap = QPixmap::fromImage(QImage(
    reinterpret_cast<uchar const*>( color_image->imageData ),
    color_image->width,
    color_image->height,
    QImage::Format_RGB32));
But that crashes when the drawEvent of the Widget is called.
Q: Is there a way to display the image stream in a QGraphicsView without converting it to a QPixmap first, or any other fast/performant way? The QGraphicsView is important since I want to add overlays to the image.
I have figured out a solution that works for me, and I also tested a little how different methods perform:
Method one is performant even in debug mode and takes only 23.7% of the execution time of the drawing procedure (using ANALYZE in VS2012):
color_image = cvCreateImage(cvGetSize(stream_image), IPL_DEPTH_8U, 4);
cv::Mat gr(stream_image);
cv::Mat col(color_image);
cv::cvtColor(gr, col, CV_GRAY2RGBA, 4);
QPixmap m_pixmap = QPixmap::fromImage(QImage(reinterpret_cast<uchar const*>( color_image->imageData ),
                                             color_image->width,
                                             color_image->height,
                                             QImage::Format_ARGB32));
Method two is still performant in debug mode, taking 42.1% of the execution time, when the following enum is used in the QImage constructor instead:
QImage::Format_RGBA8888
Method three is the one I showed in my question, and it is very slow in debug builds, being responsible for 68.3% of the drawing workload.
However, when I compile in release, all three methods are seemingly equally performant.
This is what I usually do. Use one of the constructors for QImage that uses an existing buffer and then use QPixmap::fromImage for the rest. The format of the buffer should be compatible with the display, such as QImage::Format_RGB32. In this example a vector serves as the storage for the image.
std::vector<QRgb> image( 2560 * 1920 );
QPixmap pixmap = QPixmap::fromImage( QImage(
    reinterpret_cast<uchar const*>( image.data() ),
    2560,
    1920,
    QImage::Format_RGB32 ) );
Note the alignment constraint. If the data is not 32-bit aligned, you can use one of the constructors that take a bytesPerLine argument.
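For illustration, a sketch of that overload; pitch here is a hypothetical per-row stride in bytes, not a variable from the code above:
// Hypothetical: pitch is the actual number of bytes per row,
// which may include padding beyond width * 4.
QPixmap pixmap = QPixmap::fromImage( QImage(
    reinterpret_cast<uchar const*>( image.data() ),
    2560,
    1920,
    pitch, // bytesPerLine
    QImage::Format_RGB32 ) );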
Edit:
If your image is 32-bit, then you can write:
QPixmap pixmap = QPixmap::fromImage( QImage(
    reinterpret_cast<uchar const*>( color_image->imageData ),
    color_image->width,
    color_image->height,
    QImage::Format_RGB32 ) );

Image manipulation: bitmap rotation (C++/GDI+)

I have spent much time trying to find a solution but cannot; I hope you can help me. The code is a bit long, so I give here just the part where I have the problem. My code captures a bitmap from a window, and it is saved in HBitmap. I need to rotate the bitmap, so I start GDI+ and create the bitmap pBitmap from HBitmap:
// INIT GDI
ULONG_PTR gdiplusToken;
GdiplusStartupInput gdiplusStartupInput;
GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);
if (!gdiplusToken) return 3;
// Gdip_GetRotatedDimensions:
GpBitmap* pBitmap;
int result = Gdiplus::DllExports::GdipCreateBitmapFromHBITMAP(HBitmap, 0, &pBitmap);
Then I calculate the variables needed for the rotation. Then I create a graphics object and try to rotate the image:
GpGraphics * pG;
result = Gdiplus::DllExports::GdipGetImageGraphicsContext(pBitmap, &pG);
Gdiplus::SmoothingMode smooth = SmoothingModeHighQuality;
result = Gdiplus::DllExports::GdipSetSmoothingMode(pG, smooth);
Gdiplus::InterpolationMode interpolation = InterpolationModeNearestNeighbor;
result = Gdiplus::DllExports::GdipSetInterpolationMode(pG, interpolation);
MatrixOrder MatrixOrder_ = MatrixOrderPrepend;
result = Gdiplus::DllExports::GdipTranslateWorldTransform(pG, xTranslation, yTranslation, MatrixOrder_);
MatrixOrder_ = MatrixOrderPrepend;
result = Gdiplus::DllExports::GdipRotateWorldTransform(pG, ROTATION_ANGLE, MatrixOrder_);
GpImageAttributes * ImgAttributes;
result = Gdiplus::DllExports::GdipCreateImageAttributes(&ImgAttributes); // create an ImageAttribute object
result = Gdiplus::DllExports::GdipDrawImageRectRect(pG,pBitmap,0,0,w,h,0,0,w,h,UnitPixel,ImgAttributes,0,0); // Draw the original image onto the new bitmap
result = Gdiplus::DllExports::GdipDisposeImageAttributes(ImgAttributes);
Finally I wanted to check the image so I added:
CLSID pngClsid;
GetEncoderClsid(L"image/png", &pngClsid);
result = Gdiplus::DllExports::GdipCreateBitmapFromGraphics(w, h, pG, &pBitmap);
result = Gdiplus::DllExports::GdipSaveImageToFile(pBitmap, L"justest.png", &pngClsid, NULL); // last voluntary? GDIPCONST EncoderParameters* encoderParams
But my image is blank. I found out that GdipCreateBitmapFromGraphics creates a blank image, but how should I finish it to check what I have drawn? Are these steps correct (not just here but above, near GdipCreateBitmapFromHBITMAP() and GdipGetImageGraphicsContext()), or do I need to add something? How do I get it working?
PS: I am sure that HBitmap contains picture of window, I already checked it.
To my eyes, you have some things backwards in your approach.
What you need to do is the following:
Read in your Image (src)
Find the minimum bounding rectangle that will contain the rotated image (i.e., rotate the corners; the distances between the min and max x and y are the dimensions).
Create a new Image object with these dimensions and the pixel format you want (likely the same as src, but maybe you want an alpha channel) and background color you want (dst)
Create a graphics based on dst (new Graphics(dst))
Set the appropriate transform on the graphics
Draw src onto dst
Export dst
The good news is that to make sure you're doing things right, you can isolate steps out.
For example, you can just make an image and a graphics and draw a line on it with no transform (or better a box with an X) and save that. If you have what you expect, then you're on the right path. Next add a transform to the box. In your case you'll need both a rotation and a translation. Next get the dimensions of the dest image right for that rotation (protip: don't use a square to test). Finally, do it with your actual image.
This will get you step-by-step to the correct output instead of trying to get the whole thing in one shot.
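To make those steps concrete, here is a compact sketch using the GDI+ C++ wrapper classes rather than the flat DllExports API of the question. ROTATION_ANGLE and HBitmap are from the question, GetEncoderClsid is the helper the question already uses; everything else is an assumption:
#include <windows.h>
#include <gdiplus.h>
#include <cmath>
using namespace Gdiplus;

// step 1: wrap the captured HBITMAP (src)
Bitmap src(HBitmap, NULL);
int w = src.GetWidth();
int h = src.GetHeight();

// step 2: bounding box of the rotated image
double rad = ROTATION_ANGLE * 3.14159265358979 / 180.0;
int newW = (int)(fabs(w * cos(rad)) + fabs(h * sin(rad)));
int newH = (int)(fabs(w * sin(rad)) + fabs(h * cos(rad)));

// step 3: new destination image, step 4: graphics based on dst
Bitmap dst(newW, newH, PixelFormat32bppARGB);
Graphics g(&dst);

// step 5: rotate around the center of the destination
g.TranslateTransform(newW / 2.0f, newH / 2.0f);
g.RotateTransform((REAL)ROTATION_ANGLE);
g.TranslateTransform(-w / 2.0f, -h / 2.0f);

// step 6: draw src onto dst
g.DrawImage(&src, 0, 0, w, h);

// step 7: export dst, reusing GetEncoderClsid from the question
CLSID pngClsid;
GetEncoderClsid(L"image/png", &pngClsid);
dst.Save(L"justest.png", &pngClsid, NULL);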

How to create one bitmap from parts of many textures (C++, SDL 2)?

I have *.png files and I want to take different 8x8 px parts from the textures and place them on a bitmap (an SDL_Surface, I guess, but maybe not), something like this:
Now I'm rendering without a bitmap, i.e. I call each texture and draw each part directly on screen every frame, and it's too slow. I guess I need to load each *.png into a separate bitmap, combine them, and then draw just one big bitmap, but maybe I'm wrong. I need the fastest way of doing this, and I need code for it (SDL 2, not SDL 1.3).
Or maybe I need to use plain OpenGL here?
Update:
Or maybe I need to load the *.png's into int arrays somehow, treat them like ordinary numbers, place them into one big int array, and then convert that to an SDL_Surface/SDL_Texture? It seems this is the best way, but how would I write this?
Update 2:
The colors of the pixels in each block are not the same as presented in the picture, and they can also be transparent. The picture is just an example.
Assuming you already have your bitmaps loaded up as SDL_Texture(s), composing them into a different texture is done via SDL_SetRenderTarget.
SDL_SetRenderTarget(renderer, target_texture);
SDL_RenderCopy(renderer, texture1, ...);
SDL_RenderCopy(renderer, texture2, ...);
...
SDL_SetRenderTarget(renderer, NULL);
Every render operation you perform between setting your render target and resetting it (by calling SDL_SetRenderTarget with a NULL texture parameter) will be rendered to the designated texture. You can then use this texture as you would use any other.
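One caveat: this only works if target_texture was created with render-target access, e.g. (the size is a placeholder):
// The composition target must be created with SDL_TEXTUREACCESS_TARGET,
// otherwise SDL_SetRenderTarget() fails. 256x256 is an assumed size.
SDL_Texture *target_texture = SDL_CreateTexture(renderer,
    SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_TARGET, 256, 256);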
OK so, when I asked about "solid colour", I meant: "in that 8x8 pixel area of the .png that you are copying from, do all 64 pixels have the same identical RGB value?" It looks that way in your diagram, so how about this:
How about creating an SDL_Surface and directly painting 8x8 pixel areas of the memory pointed to by the pixels member of that SDL_Surface with the values read from the original .png?
And then, when you're done, converting that surface to an SDL_Texture and rendering that?
You would avoid all the SDL_UpdateTexture() calls.
Anyway here is some example code. Let's say that you create a class called EightByEight.
class EightByEight
{
public:
    EightByEight( SDL_Surface * pDest, Uint8 r, Uint8 g, Uint8 b ):
        m_pSurface(pDest),
        m_red(r),
        m_green(g),
        m_blue(b){}
    void BlitToSurface( int column, int row );
private:
    SDL_Surface * m_pSurface;
    Uint8 m_red;
    Uint8 m_green;
    Uint8 m_blue;
};
You construct an object of type EightByEight by passing it a pointer to an SDL_Surface and also some values for red, green and blue. This RGB corresponds to the RGB value taken from the particular 8x8 pixel area of the .png you are currently reading from. You will paint a particular 8x8 pixel area of the SDL_Surface pixels with this RGB value.
So now, when you want to paint an area of the SDL_Surface, you use the function BlitToSurface() and pass in a column and row value. For example, if you divided the SDL_Surface into 8x8 pixel squares, BlitToSurface(3,5) means paint the square at the 4th column and 6th row with the RGB value that I set on construction.
The BlitToSurface() looks like this:
void EightByEight::BlitToSurface(int column, int row)
{
    // column and row are 8x8-block indices, so scale both by 8;
    // pitch is in bytes, so divide by 4 to step in Uint32 units
    Uint32 * pixel = (Uint32*)m_pSurface->pixels
                     + (row * 8 * (m_pSurface->pitch / 4)) + (column * 8);
    // now pixel is pointing to the first pixel in the correct 8x8 pixel square
    // of the Surface's pixel memory. Now you need to paint 8 rows of 8 pixels,
    // but be careful - you need to advance by (pitch / 4) - 8 each time
    for(int y = 0; y < 8; y++)
    {
        // paint a row
        for(int i = 0; i < 8; i++)
        {
            *pixel++ = SDL_MapRGB(m_pSurface->format, m_red, m_green, m_blue);
        }
        // advance pixel pointer by (pitch / 4) - 8, to get to the next "row"
        pixel += (m_pSurface->pitch / 4) - 8;
    }
}
I'm sure you could probably speed things up further by pre-calculating the mapped RGB value on construction. Or, if you're reading a pixel from the texture, you could probably dispense with SDL_MapRGB() (it's just there in case the surface has a different pixel format from the .png).
memcpy is probably faster than 8 individual assignments of the RGB value, but I just want to demonstrate the technique. You could experiment.
So, all the EightByEight objects you create, all point to the same SDL_Surface.
And then, when you're done, you just convert that SDL_Surface to an SDL_Texture and blit that.
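That final conversion is a one-liner; a minimal sketch, where surface stands for the shared SDL_Surface and renderer for your SDL_Renderer:
// Convert the composed surface to a texture once, then draw the texture.
SDL_Texture *tex = SDL_CreateTextureFromSurface(renderer, surface);
SDL_FreeSurface(surface);   // the pixel data now lives in the texture
SDL_RenderCopy(renderer, tex, NULL, NULL);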
Thanks to everyone who took part; my friends and I solved it. So here is an example (the full source code is too big and unnecessary here, so I'll just describe the main idea):
int pitch, *pixels;
SDL_Texture *texture;
...
if (!SDL_LockTexture(texture, 0, (void **)&pixels, &pitch))
{
    for (/*Conditions*/)
        memcpy(/*Params*/);
    SDL_UnlockTexture(texture);
}
SDL_RenderCopy(renderer, texture, 0, 0);
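For completeness, here is a self-contained sketch of that idea with the elided parts filled in hypothetically; the 128x128 size, the ARGB format, and the tightly packed source buffer src are assumptions, not the author's actual code:
// Hypothetical: copy a tightly packed 128x128 ARGB buffer (src, a Uint8*)
// into a streaming texture row by row, honoring the texture's pitch.
SDL_Texture *texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
                                         SDL_TEXTUREACCESS_STREAMING, 128, 128);
void *pixels;
int pitch;
if (!SDL_LockTexture(texture, NULL, &pixels, &pitch))   // 0 means success
{
    for (int y = 0; y < 128; ++y)                       // one memcpy per row
        memcpy((Uint8 *)pixels + y * pitch, src + y * 128 * 4, 128 * 4);
    SDL_UnlockTexture(texture);
}
SDL_RenderCopy(renderer, texture, NULL, NULL);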

opengl video freeze

I have an IDS uEye cam and feed the captured frames via PBO to OpenGL (OpenTK). On my developer PC it works great, but on slower machines the video freezes after some time.
Code for allocating memory via OpenGL and mapping it for uEye, so the camera saves the captured images in here:
// Generate PBO and save id
GL.GenBuffers(1, out this.frameBuffer[i].BufferID);
// Define the type of the buffer.
GL.BindBuffer(BufferTarget.PixelUnpackBuffer, this.frameBuffer[i].BufferID);
// Define buffer size.
GL.BufferData(BufferTarget.PixelUnpackBuffer, new IntPtr(width * height * depth), IntPtr.Zero, BufferUsageHint.StreamDraw);
// Get pointer to the buffer allocated by OpenGL and
// lock it globally with uEye.
this.frameBuffer[i].PointerToNormalMemory = GL.MapBuffer(BufferTarget.PixelUnpackBuffer, BufferAccess.WriteOnly);
this.frameBuffer[i].PointerToLockedMemory = uEye.GlobalLock(this.frameBuffer[i].PointerToNormalMemory);
// Unmap PBO after use.
GL.UnmapBuffer(BufferTarget.PixelUnpackBuffer);
// Set selected PBO to none.
GL.BindBuffer(BufferTarget.PixelUnpackBuffer, 0);
// Register buffer to uEye
this.Succeeded("SetAllocatedImageMem", this.cam.SetAllocatedImageMem(width, height, depth, this.frameBuffer[i].PointerToLockedMemory, ref this.frameBuffer[i].MemId));
// Add buffer to uEye-Ringbuffer
this.Succeeded("AddToSequence", this.cam.AddToSequence(this.frameBuffer[i].PointerToLockedMemory, this.frameBuffer[i].MemId));
To copy the image from the PBO to a texture (the texture is created and OK):
// Select PBO with new video image
GL.BindBuffer(BufferTarget.PixelUnpackBuffer, nextBufferId);
// Select videotexture as current
GL.BindTexture(TextureTarget.Texture2D, this.videoTextureId);
// Copy PBO to texture
GL.TexSubImage2D(
    TextureTarget.Texture2D,
    0,
    0,
    0,
    nextBufferSize.Width,
    nextBufferSize.Height,
    OpenTK.Graphics.OpenGL.PixelFormat.Bgr,
    PixelType.UnsignedByte,
    IntPtr.Zero);
// Release Texture
GL.BindTexture(TextureTarget.Texture2D, 0);
// Release PBO
GL.BindBuffer(BufferTarget.PixelUnpackBuffer, 0);
Maybe someone can see the mistake... After about 6 seconds the uEye events don't deliver any more images. When I remove TexSubImage2D it works well, but of course no image appears.
Is there maybe a lock or something from OpenGL?
Thanks in advance - Thomas
It seems like a shared-buffer problem. You may try to implement a simple queue mechanism to get rid of that problem.
Sample code (not meant to be working):
queue< vector<BYTE> > frames;
...
frames.push(vector<BYTE>(frameBuffer, frameBuffer + frameSize));
...
// use frame here at GL.TexSubImage2D using frames.front()
frames.pop();
I found the failure myself. Just replace StreamDraw with StreamRead in the code above:
GL.BufferData(BufferTarget.PixelUnpackBuffer, new IntPtr(width * height * depth), IntPtr.Zero, BufferUsageHint.StreamRead);