Get RGB intensity from image in SwiftUI - swiftui

I know this question has already been asked, but I don't understand the existing answer.
I want to create a function:
func getRGBAsFromImage() -> NSArray { }
But I can't manage to implement it in Swift from this Objective-C answer:
+ (NSArray*)getRGBAsFromImage:(UIImage*)image atX:(int)x andY:(int)y count:(int)count
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];
    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char*) calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    // Now your rawData contains the image data in the RGBA8888 pixel format.
    NSUInteger byteIndex = (bytesPerRow * y) + x * bytesPerPixel;
    for (int i = 0 ; i < count ; ++i)
    {
        CGFloat alpha = ((CGFloat) rawData[byteIndex + 3] ) / 255.0f;
        CGFloat red   = ((CGFloat) rawData[byteIndex]     ) / alpha;
        CGFloat green = ((CGFloat) rawData[byteIndex + 1] ) / alpha;
        CGFloat blue  = ((CGFloat) rawData[byteIndex + 2] ) / alpha;
        byteIndex += bytesPerPixel;
        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }
    free(rawData);
    return result;
}
How can I do this in Swift?
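A minimal Swift sketch of the same approach (assuming an RGBA8888, premultiplied-alpha buffer, as in the Objective-C answer above) could look like the following; the name getRGBAs(from:atX:y:count:) and the [UIColor] return type are only illustrative choices, not an existing API:
import UIKit

// A rough Swift translation of the Objective-C answer above (a sketch, not tested).
func getRGBAs(from image: UIImage, atX x: Int, y: Int, count: Int) -> [UIColor] {
    guard let cgImage = image.cgImage else { return [] }

    let width = cgImage.width
    let height = cgImage.height
    let bytesPerPixel = 4
    let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Big.rawValue

    // Let Core Graphics allocate the RGBA8888 buffer and read it back after drawing.
    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: bytesPerPixel * width,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: bitmapInfo) else { return [] }

    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let buffer = context.data?.assumingMemoryBound(to: UInt8.self) else { return [] }

    let bytesPerRow = context.bytesPerRow
    var result: [UIColor] = []
    var byteIndex = bytesPerRow * y + bytesPerPixel * x
    for _ in 0..<count {
        let alpha = CGFloat(buffer[byteIndex + 3]) / 255.0
        // The buffer is premultiplied: divide by alpha to un-premultiply,
        // and by 255 so the components land in the 0...1 range UIColor expects.
        let red   = alpha > 0 ? CGFloat(buffer[byteIndex])     / 255.0 / alpha : 0
        let green = alpha > 0 ? CGFloat(buffer[byteIndex + 1]) / 255.0 / alpha : 0
        let blue  = alpha > 0 ? CGFloat(buffer[byteIndex + 2]) / 255.0 / alpha : 0
        result.append(UIColor(red: red, green: green, blue: blue, alpha: alpha))
        byteIndex += bytesPerPixel
    }
    return result
}
The returned UIColor values can then be wrapped in SwiftUI Color values, or read back with getRed(_:green:blue:alpha:), depending on what "intensity" you need.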

Related

How to use CGContextDrawImage?

I need some help using CGContextDrawImage. I have the following code, which creates a bitmap context and converts the pixel data to a CGImageRef. Now I need to display that image using CGContextDrawImage, but I'm not very clear on how I'm supposed to do that. The following is my code:
- (void)drawBufferWidth:(int)width height:(int)height pixels:(unsigned char*)pixels
{
const int area = width *height;
const int componentsPerPixel = 4;
unsigned char pixelData[area * componentsPerPixel];
for(int i = 0; i<area; i++)
{
const int offset = i * componentsPerPixel;
pixelData[offset] = pixels[0];
pixelData[offset+1] = pixels[1];
pixelData[offset+2] = pixels[2];
pixelData[offset+3] = pixels[3];
}
const size_t BitsPerComponent = 8;
const size_t BytesPerRow=((BitsPerComponent * width) / 8) * componentsPerPixel;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(pixelData, width, height, BitsPerComponent, BytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CGImageRef myimage = CGBitmapContextCreateImage(gtx);
//What code should go here to display the image?
CGContextRelease(gtx);
CGImageRelease(myimage);
}
Any help or a sample piece of code would be great. Thanks in advance!
Create a file named MyDrawingView.h:
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>
@interface MyDrawingView : UIView
{
}
@end
Now create a file named MyDrawingView.m:
#import "MyDrawingView.h"
@implementation MyDrawingView
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code
// Write your initialization code if any.
}
return self;
}
// Only override drawRect: if you perform custom drawing.
- (void)drawRect:(CGRect)rect
{
// Drawing code
// Create Current Context To Draw
CGContextRef context = UIGraphicsGetCurrentContext();
UIImage *image = [UIImage imageNamed:@"image.jpg"];
// Draws your image at the given point
[image drawAtPoint:CGPointMake(10, 10)];
}
@end
// Now, to use it in your view:
#import "MyDrawingView.h"
MyDrawingView *drawView = [[MyDrawingView alloc] initWithFrame:CGRectMake(0, 0, 200, 200)];
[self.view addSubview:drawView];
// whenever you want to update that view call
[drawView setNeedsDisplay];
// this is your method to process (draw) the image
- (void)drawBufferWidth:(int)width height:(int)height pixels:(unsigned char*)pixels
{
const int area = width *height;
const int componentsPerPixel = 4;
unsigned char pixelData[area * componentsPerPixel];
for(int i = 0; i<area; i++)
{
const int offset = i * componentsPerPixel;
pixelData[offset] = pixels[0];
pixelData[offset+1] = pixels[1];
pixelData[offset+2] = pixels[2];
pixelData[offset+3] = pixels[3];
}
const size_t BitsPerComponent = 8;
const size_t BytesPerRow=((BitsPerComponent * width) / 8) * componentsPerPixel;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(pixelData, width, height, BitsPerComponent, BytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CGImageRef myimage = CGBitmapContextCreateImage(gtx);
// Convert to UIImage
UIImage *image = [UIImage imageWithCGImage:myimage];
// Create a rect to display
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
// Here are the two snippets to draw the image
CGContextRef context = UIGraphicsGetCurrentContext();
// Transform image
CGContextTranslateCTM(context, 0, image.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
// Finally draw your image
CGContextDrawImage(context, imageRect, image.CGImage);
// You can also use following to draw your image in 'drawRect' method
// [[UIImage imageWithCGImage:myimage] drawInRect:CGRectMake(0, 0, 145, 15)];
CGContextRelease(gtx);
CGImageRelease(myimage);
}
If anyone else is dealing with this problem, try inserting the following code into Ereka's code; Dipen's solution seems like a bit too much. Right after the comment "//What code should go here to display the image", put the following code:
CGRect myContextRect = CGRectMake (0, 0, width, height);
CGContextDrawImage (gtx, myContextRect, myimage);
CGImageRef imageRef = CGBitmapContextCreateImage(gtx);
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
UIImageView *imgView = [[UIImageView alloc] initWithImage:finalImage];
[self.view addSubview:imgView];

OpenCV : k-means clustering in iOS

I am trying to do k-means clustering in iOS. To do that, I convert a UIImage to a cv::Mat and wrote a function to cluster the cv::Mat, but the function does not work well.
The result looks almost right, but the columns on the right side come out black. I read the OpenCV reference and have no idea what's wrong.
The code is below. Any help would be really appreciated.
Please excuse my poor English...
- (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];
CGColorSpaceRef colorSpace;
if (cvMat.elemSize() == 1) {
colorSpace = CGColorSpaceCreateDeviceGray();
} else {
colorSpace = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
// Creating CGImage from cv::Mat
CGImageRef imageRef = CGImageCreate(
cvMat.cols, //width
cvMat.rows, //height
8, //bits per component
8 * cvMat.elemSize(), //bits per pixel
cvMat.step[0], //bytesPerRow
colorSpace, //colorspace
kCGImageAlphaNone|kCGBitmapByteOrderDefault,// bitmap info
provider, //CGDataProviderRef
NULL, //decode
false, //should interpolate
kCGRenderingIntentDefault //intent
);
// Getting UIImage from CGImage
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return finalImage;
}
- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
CGFloat cols = image.size.width;
CGFloat rows = image.size.height;
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
CGContextRef contextRef = CGBitmapContextCreate(
cvMat.data, // Pointer to data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);
return cvMat;
}
- (cv::Mat)kMeansClustering:(cv::Mat)input
{
cv::Mat samples(input.rows * input.cols, 3, CV_32F);
for( int y = 0; y < input.rows; y++ ){
for( int x = 0; x < input.cols; x++ ){
for( int z = 0; z < 3; z++){
samples.at<float>(y + x*input.rows, z) = input.at<cv::Vec3b>(y,x)[z];
}
}
}
int clusterCount = 20;
cv::Mat labels;
int attempts = 5;
cv::Mat centers;
kmeans(samples, clusterCount, labels, cv::TermCriteria(CV_TERMCRIT_ITER|CV_TERMCRIT_EPS, 100, 0.01), attempts, cv::KMEANS_PP_CENTERS, centers );
cv::Mat new_image( input.rows, input.cols, input.type());
for( int y = 0; y < input.rows; y++ ){
for( int x = 0; x < input.cols; x++ )
{
int cluster_idx = labels.at<int>(y + x*input.rows,0);
new_image.at<cv::Vec3b>(y,x)[0] = centers.at<float>(cluster_idx, 0);
new_image.at<cv::Vec3b>(y,x)[1] = centers.at<float>(cluster_idx, 1);
new_image.at<cv::Vec3b>(y,x)[2] = centers.at<float>(cluster_idx, 2);
}
}
return new_image;
}
You are supplying kMeansClustering with a four-channel image, but it expects 3 channels. Try losing the alpha channel: because at<cv::Vec3b> steps through the four-channel data three bytes at a time, only the first three quarters of each row gets read and written back, which is why the right-hand columns come out black.
Add this at the top of the function:
cv::cvtColor(input , input , CV_RGBA2RGB);
so it looks like this:
- (cv::Mat)kMeansClustering:(cv::Mat)input
{
cv::cvtColor(input , input , CV_RGBA2RGB);
cv::Mat samples(input.rows * input.cols, 3, CV_32F);

How to colorize a grayscale CCSprite in cocos2d?

I want to colorize a grayscale CCSprite in cocos2d programmatically. For example, the grayscale CCSprite is:
http://cc.cocimg.com/bbs/attachment/Fid_18/18_171059_43b48b6f1bc40a5.png
and the output CCSprite should be:
http://cc.cocimg.com/bbs/attachment/Fid_18/18_171059_574b2f34cb78b49.png
But I cannot get the correct result.
If I use [CCSprite setColor:], the CCSprite I get is dark.
I used CCRenderTexture and tried two different blendFuncs. One is:
-(CCSprite *) try1:(CCSprite *)gray color:(ccColor3B) color
{
CCRenderTexture *rtx = [CCRenderTexture renderTextureWithWidth:gray.contentSize.width height:gray.contentSize.height];
ccColor4F c = ccc4FFromccc3B(color);
[rtx beginWithClear:c.r g:c.g b:c.b a:c.a];
gray.position = ccp(gray.contentSize.width/2, gray.contentSize.height/2);
gray.blendFunc = (ccBlendFunc){GL_ONE_MINUS_SRC_COLOR, GL_SRC_COLOR};
[gray visit];
[rtx end];
CCSprite *sp = [CCSprite spriteWithTexture:rtx.sprite.texture];
sp.flipY = YES;
return sp;
}
the other is:
-(CCSprite *) try2:(CCSprite *)gray color:(ccColor3B) color
{
CCRenderTexture *rtx = [CCRenderTexture renderTextureWithWidth:gray.contentSize.width height:gray.contentSize.height];
ccColor4F c = ccc4FFromccc3B(color);
[rtx beginWithClear:c.r g:c.g b:c.b a:0];
gray.position = ccp(gray.contentSize.width/2, gray.contentSize.height/2);
gray.blendFunc = (ccBlendFunc){GL_SRC_COLOR, GL_SRC_ALPHA};
[gray visit];
[rtx end];
CCSprite *sp = [CCSprite spriteWithTexture:rtx.sprite.texture];
sp.flipY = YES;
return sp;
}
and I even tried a shader:
#ifdef GL_ES
precision mediump float;
#endif
uniform sampler2D u_texture;
varying vec2 v_texCoord;
varying vec4 v_fragmentColor;
uniform vec4 u_fillcolor;
#pragma debug(on)
void main(void)
{
vec4 c = texture2D(u_texture, v_texCoord);
float r;
float g;
float b;
r = u_fillcolor.r + (2.0 * c.r - 1.0);
g = u_fillcolor.g + (2.0 * c.g - 1.0);
b = u_fillcolor.b + (2.0 * c.b - 1.0);
gl_FragColor = vec4(r,g,b, c.a);
}
but the outputs are not correct.
Could you help me find a way to output the correct CCSprite?
Thanks.
(I am a new user of Stack Overflow; my reputation is not enough to add images, so I have to add them as links.)
If you are not constantly changing the colors this method will work.
typedef struct _SOARGB {
unsigned char a;
unsigned char r;
unsigned char g;
unsigned char b;
} SOARGB;
+(CGImageRef)colorizeImage:(CGImageRef)image {
CGContextRef contextRef = [self createARGBBitmapContext:image];
NSAssert(contextRef != nil, @"Failed to create context");
// Get image width, height. We'll use the entire image.
size_t width = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
CGRect rect = {{0,0},{width,height}};
// Draw the image to the bitmap context.
CGContextDrawImage(contextRef, rect, image);
// Get a pointer to the image data associated with the bitmap context.
SOARGB *data = (SOARGB*)CGBitmapContextGetData (contextRef);
NSAssert(data != NULL, @"Could not retrieve pixel data from context");
//
// Do the colorization job.
//
// This colorization method is ONLY an example and
// should probably be changed!
SOARGB tint;
tint.r = 255;
tint.g = 0;
tint.b = 0;
tint.a = 0;
for (int i = 0; i < width*height; i++) {
SOARGB *c = &data[i];
CGFloat alpha = c->r / 255.0;
c->r -= (c->r - tint.r)*0.3*alpha*alpha;
c->g -= (c->g - tint.g)*0.3*alpha*alpha;
c->b -= (c->b - tint.b)*0.3*alpha*alpha;
}
// End of colorization job
// Create image from modified context
return CGBitmapContextCreateImage(contextRef);
}
+(CGContextRef)createARGBBitmapContext:(CGImageRef)inImage {
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
void * bitmapData;
int bitmapByteCount;
int bitmapBytesPerRow;
// Get image width, height. We'll use the entire image.
size_t pixelsWide = CGImageGetWidth(inImage);
size_t pixelsHigh = CGImageGetHeight(inImage);
// Declare the number of bytes per row. Each pixel in the bitmap in this
// example is represented by 4 bytes; 8 bits each of red, green, blue, and
// alpha.
bitmapBytesPerRow = (pixelsWide * 4);
bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
// Use the generic RGB color space.
colorSpace = CGColorSpaceCreateDeviceRGB();
if (colorSpace == NULL)
{
fprintf(stderr, "Error allocating color space\n");
return NULL;
}
// Allocate memory for image data. This is the destination in memory
// where any drawing to the bitmap context will be rendered.
bitmapData = malloc( bitmapByteCount );
if (bitmapData == NULL)
{
fprintf (stderr, "Memory not allocated!");
CGColorSpaceRelease( colorSpace );
return NULL;
}
// zeroing out the data
memset(bitmapData, 0, bitmapByteCount);
// Create the bitmap context. We want pre-multiplied ARGB, 8-bits
// per component. Regardless of what the source image format is
// (CMYK, Grayscale, and so on) it will be converted over to the format
// specified here by CGBitmapContextCreate.
context = CGBitmapContextCreate (bitmapData,
pixelsWide,
pixelsHigh,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedFirst);
if (context == NULL)
{
free (bitmapData);
fprintf (stderr, "Context not created!");
}
// Make sure and release colorspace before returning
CGColorSpaceRelease( colorSpace );
return context;
}
To load a sprite using the above method:
UIImage *image = [UIImage imageNamed:@"myImage.png"];
CGImageRef colorizedImageRef = [self colorizeImage:image.CGImage];
CCSprite *colorizedSprite = [CCSprite spriteWithCGImage:colorizedImageRef key:@"myImage"];
Hope this helps.

SDL_surface that contains several images

Suppose I have an SDL_Surface that contains just one image.
What if I wanted to make that SDL_Surface have three copies of that image, one below the other?
I came up with this function, but it doesn't show anything:
void ElementView::adjust()
{
int imageHeight = this->img->h;
int desiredHeight = 3*imageHeight;
int repetitions = desiredHeight / imageHeight ;
int remainder = desiredHeight % imageHeight ;
SDL_Surface* newSurf = SDL_CreateRGBSurface(img->flags, img->w, desiredHeight, 32, img->format->Rmask, img->format->Gmask, img->format->Bmask,img->format->Amask);
SDL_Rect rect;
memset(&rect, 0, sizeof(SDL_Rect));
rect.w = this->img->w;
rect.h = this->img->h;
for (int i = 0 ; i < repetitions ; i++)
{
rect.y = i*imageHeight;
SDL_BlitSurface(img,NULL,newSurf,&rect);
}
rect.y += remainder;
SDL_BlitSurface(this->img,NULL,newSurf,&rect);
if (newSurf != NULL) {
SDL_FreeSurface(this->img);
this->img = newSurf;
}
}
I think you should:
1. Create a new surface that is 3 times as long as the initial one
2. Copy from img to the new surface using code similar to what you have (SDL_BlitSurface), except with the destination being your new surface
3. Call SDL_FreeSurface on your original img
4. Assign your new surface to img
Edit: Here is some sample code; I didn't have time to test it, though...
void adjust(SDL_Surface** img)
{
SDL_PixelFormat *fmt = (*img)->format;
SDL_Surface* newSurf = SDL_CreateRGBSurface((*img)->flags, (*img)->w, (*img)->h * 3, fmt->BytesPerPixel * 8, fmt->Rmask, fmt->Gmask, fmt->Bmask, fmt->Amask);
SDL_Rect rect;
memset(&rect, 0, sizeof(SDL_Rect));
rect.w = (*img)->w;
rect.h = (*img)->h;
for (int i = 0; i < 3; i++)
{
SDL_BlitSurface(*img,NULL,newSurf,&rect);
rect.y += (*img)->h;
}
SDL_FreeSurface(*img);
*img = newSurf;
}

Outline (stroke) for non-rectangular CCNode in cocos2d

I need to create an outline like this dynamically:
Not for a CCSprite, but for multiple animated CCSprites united in one CCNode. I'm thinking about:
1. copying the CCNode's content to a texture (like canvasBitmapData.draw(sourceDisplayObject) in AS3)
2. creating a CCSprite with the resulting texture
3. tinting the sprite to the outline color and scaling it up a bit
4. placing the sprite behind the other sprites in the node
I have no idea how to perform step 1. And maybe it would be faster to draw a "true stroke" around the texture's opaque pixels instead of the tint-and-scale in step 3?
I totally forgot to post an answer to this question. Here's the code for a very smooth stroke. It's not fast, but it worked great for a couple of big sprites on the first iPad.
The idea is to draw tiny colored, blurred balls around the sprite and place them onto their own texture. It can be used for both CCNode and CCSprite. The code also shifts anchor points, because the resulting sprites will have a slightly larger width and height.
Resulting outline (body and 2 hands, about 0.3s on iPad1):
White balls examples:
5f: http://i.stack.imgur.com/e9kos.png
10f: http://i.stack.imgur.com/S5goU.png
20f: http://i.stack.imgur.com/qk7GL.png
CCNode category, for Cocos2d-iPhone 2.1:
@implementation CCNode (Outline)
- (CCSprite*) outline
{
return [self outlineRect:CGRectMake(0, 0, self.contentSize.width, self.contentSize.height)];
}
- (CCSprite*) outlineRect:(CGRect)rect
{
NSInteger gap = dscale(4);
CGPoint positionShift = ccp(gap - rect.origin.x, gap - rect.origin.y);
CGSize canvasSize = CGSizeMake(rect.size.width + gap * 2, rect.size.height + gap * 2);
CCRenderTexture* renderedSpriteTexture = [self renderTextureFrom:self shiftedFor:positionShift onCanvasSized:canvasSize];
CGSize textureSize = renderedSpriteTexture.sprite.contentSize;
CGSize textureSizeInPixels = renderedSpriteTexture.sprite.texture.contentSizeInPixels;
NSInteger bitsPerComponent = 8;
NSInteger bytesPerPixel = (bitsPerComponent * 4) / 8;
NSInteger bytesPerRow = bytesPerPixel * textureSizeInPixels.width;
NSInteger myDataLength = bytesPerRow * textureSizeInPixels.height;
NSMutableData* buffer = [[NSMutableData alloc] initWithCapacity:myDataLength];
Byte* bytes = (Byte*)[buffer mutableBytes];
[renderedSpriteTexture begin];
glReadPixels(0, 0, textureSizeInPixels.width, textureSizeInPixels.height, GL_RGBA, GL_UNSIGNED_BYTE, bytes);
[renderedSpriteTexture end];
//SEE ATTACHMENT TO GET THE FILES
NSString* spriteFrameName;
if (IS_IPAD) spriteFrameName = (CC_CONTENT_SCALE_FACTOR() == 1) ? @"10f.png" : @"20f.png";
else spriteFrameName = (CC_CONTENT_SCALE_FACTOR() == 1) ? @"5f.png" : @"10f.png";
CCSprite* circle = [CCSprite spriteWithSpriteFrameName:spriteFrameName];
circle.anchorPoint = ccp(0.48, 0.48);
float retinaScale = (CC_CONTENT_SCALE_FACTOR() == 1) ? 1.0 : 0.5;
CCRenderTexture* strokeTexture = [CCRenderTexture renderTextureWithWidth:textureSize.width height:textureSize.height pixelFormat:kCCTexture2DPixelFormat_RGBA8888];
[strokeTexture beginWithClear:0 g:0 b:0 a:0];
for (NSInteger x = 0; x < textureSizeInPixels.width; x++)
{
for (NSInteger y = 0; y < textureSizeInPixels.height; y++)
{
NSInteger idx = y * bytesPerRow + x * bytesPerPixel + 3;
NSInteger w = 1;
if (bytes[idx] <= 254)
{
BOOL shouldBeStroked = NO;
for (NSInteger nx = -w; nx <= w; nx++)
{
for (NSInteger ny = -w; ny <= w; ny++)
{
if (x + nx < 0 || y + ny < 0 || x + nx >= textureSizeInPixels.width || y + ny >= textureSizeInPixels.height)
continue;
if (bytes[idx + nx * bytesPerPixel + ny * bytesPerRow] == 255)
{
shouldBeStroked = YES;
break;
}
}
}
if (shouldBeStroked == YES)
{
circle.position = ccp(x * retinaScale, y * retinaScale);
[circle visit];
}
}
}
}
[strokeTexture end];
CCSprite* resultSprite = [CCSprite spriteWithTexture:strokeTexture.sprite.texture];
[resultSprite.texture setAntiAliasTexParameters];
resultSprite.flipY = YES;
if ([self isKindOfClass:[CCSprite class]]) {
CGPoint oldAnchorInPixels = ccp(roundf(self.contentSize.width * self.anchorPoint.x), roundf(self.contentSize.height * self.anchorPoint.y));
resultSprite.anchorPoint = ccp((oldAnchorInPixels.x + gap) / resultSprite.contentSize.width, (oldAnchorInPixels.y + gap) / resultSprite.contentSize.height);
resultSprite.position = self.position;
} else { //CCNode
resultSprite.anchorPoint = CGPointZero;
resultSprite.position = ccpAdd(self.position, ccp(rect.origin.x - gap, rect.origin.y - gap));
}
return resultSprite;
}
- (CCRenderTexture*) renderTextureFrom:(CCNode*)node shiftedFor:(CGPoint)posShift onCanvasSized:(CGSize)size
{
SoftAssertion(!CGSizeEqualToSize(size, CGSizeZero), @"node has zero size");
BOOL isSprite = [node isMemberOfClass:[CCSprite class]];
CGPoint apSave = node.anchorPoint;
CGPoint posSave = node.position;
BOOL wasVisible = node.visible;
CCRenderTexture* rtx = [CCRenderTexture renderTextureWithWidth:size.width
height:size.height
pixelFormat:kCCTexture2DPixelFormat_RGBA8888];
[rtx beginWithClear:0 g:0 b:0 a:0];
node.anchorPoint = CGPointZero;
node.position = posShift;
node.visible = YES;
if (isSprite) [node visit];
else [[self cloneCCNode:node] visit];
node.anchorPoint = apSave;
node.position = posSave;
node.visible = wasVisible;
[rtx end];
return rtx;
}
- (CCNode*) cloneCCNode:(CCNode*)source
{
CCNode* clone = [CCNode node];
void (^copyCCNodeProperties)(CCNode*, CCNode*) = ^(CCNode* source, CCNode* clone)
{
clone.visible = source.visible;
clone.rotation = source.rotation;
clone.position = source.position;
clone.anchorPoint = source.anchorPoint;
clone.zOrder = source.zOrder;
clone.tag = source.tag;
};
for (CCNode* srcSubnode in source.children) {
CCNode* subNode;
if ([srcSubnode isMemberOfClass:[CCSprite class]]) {
CCSprite* srcSprite = (CCSprite*)srcSubnode;
subNode = [CCSprite spriteWithTexture:srcSprite.texture];
CCSprite* subSprite = (CCSprite*)subNode;
subSprite.flipX = srcSprite.flipX;
subSprite.flipY = srcSprite.flipY;
subSprite.displayFrame = srcSprite.displayFrame;
subSprite.opacity = srcSprite.opacity;
}
else if ([srcSubnode isMemberOfClass:[CCLabelTTF class]]) {
CCLabelTTF* srcLabel = (CCLabelTTF*)srcSubnode;
subNode = [CCLabelTTF labelWithString:srcLabel.string fontName:srcLabel.fontName fontSize:srcLabel.fontSize dimensions:srcLabel.dimensions hAlignment:srcLabel.horizontalAlignment vAlignment:srcLabel.verticalAlignment];
CCSprite* subLabel = (CCSprite*)subNode;
subLabel.flipX = srcLabel.flipX;
subLabel.flipY = srcLabel.flipY;
subLabel.color = srcLabel.color;
}
else {
subNode = [self cloneCCNode:srcSubnode];
}
copyCCNodeProperties(srcSubnode, subNode);
[clone addChild:subNode];
}
copyCCNodeProperties(source, clone);
return clone;
}
@end
I have a general-purpose function I built up from various sources (which I'm ashamed to say I can't reference here). What it does is take a CCSprite, create a stroke that you can put behind it, and return it in a CCRenderTexture. If the sprite passed in has children (as yours might), I see no reason why it wouldn't do what you want, but I haven't tried.
Here it is in case it works:
@implementation CocosUtil
+(CCRenderTexture*) createStrokeForSprite:(CCSprite*)sprite size:(float)size color:(ccColor3B)cor
{
CCRenderTexture* rt = [CCRenderTexture renderTextureWithWidth:sprite.texture.contentSize.width+size*2 height:sprite.texture.contentSize.height+size*2];
CGPoint originalPos = [sprite position];
ccColor3B originalColor = [sprite color];
BOOL originalVisibility = [sprite visible];
[sprite setColor:cor];
[sprite setVisible:YES];
ccBlendFunc originalBlend = [sprite blendFunc];
[sprite setBlendFunc:(ccBlendFunc) { GL_SRC_ALPHA, GL_ONE }];
CGPoint bottomLeft = ccp(sprite.texture.contentSize.width * sprite.anchorPoint.x + size, sprite.texture.contentSize.height * sprite.anchorPoint.y + size);
CGPoint positionOffset = ccp(sprite.texture.contentSize.width * sprite.anchorPoint.x - sprite.texture.contentSize.width/2,sprite.texture.contentSize.height * sprite.anchorPoint.y - sprite.texture.contentSize.height/2);
CGPoint position = ccpSub(originalPos, positionOffset);
[rt begin];
for (int i=0; i<360; i+=30)
{
[sprite setPosition:ccp(bottomLeft.x + sin(CC_DEGREES_TO_RADIANS(i))*size, bottomLeft.y + cos(CC_DEGREES_TO_RADIANS(i))*size)];
[sprite visit];
}
[rt end];
[sprite setPosition:originalPos];
[sprite setColor:originalColor];
[sprite setBlendFunc:originalBlend];
[sprite setVisible:originalVisibility];
[rt setPosition:position];
return rt;
}
@end
and here is code where I use it:
- (id) initWithSprite:(CCSprite*)sprite color:(ccColor3B)color strokeSize:(float)strokeSize strokeColor:(ccColor3B)strokeColor {
self = [super init];
if (self != nil) {
strokeColor_ = strokeColor;
strokeSize_ = strokeSize;
CCRenderTexture *stroke = [CocosUtil createStrokeForSprite:sprite size:strokeSize color:strokeColor];
[self addChild:stroke z:kZStroke tag:kStroke];
[self addChild:sprite z:kZLabel tag:kLabel];
[self setContentSize:[sprite contentSize]];
}
return self;
}