How to use CGContextDrawImage? - quartz-2d

I need some help using CGContextDrawImage. I have the following code, which creates a bitmap context and converts the pixel data to a CGImageRef. Now I need to display that image using CGContextDrawImage, but I'm not clear on how to use it. Here is my code:
- (void)drawBufferWidth:(int)width height:(int)height pixels:(unsigned char*)pixels
{
    const int area = width * height;
    const int componentsPerPixel = 4;
    unsigned char pixelData[area * componentsPerPixel];
    for (int i = 0; i < area; i++)
    {
        const int offset = i * componentsPerPixel;
        // Copy this pixel's four components from the source buffer
        // (reading pixels[0..3] here would repeat the first pixel everywhere).
        pixelData[offset]     = pixels[offset];
        pixelData[offset + 1] = pixels[offset + 1];
        pixelData[offset + 2] = pixels[offset + 2];
        pixelData[offset + 3] = pixels[offset + 3];
    }
    const size_t bitsPerComponent = 8;
    const size_t bytesPerRow = ((bitsPerComponent * width) / 8) * componentsPerPixel;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef gtx = CGBitmapContextCreate(pixelData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
    CGImageRef myimage = CGBitmapContextCreateImage(gtx);
    // What code should go here to display the image?
    CGColorSpaceRelease(colorSpace); // release the color space we created
    CGContextRelease(gtx);
    CGImageRelease(myimage);
}
Any help or a sample piece of code would be great. Thanks in advance!

Create a file named MyDrawingView.h:
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface MyDrawingView : UIView
{
}
@end
Now create a file named MyDrawingView.m:
#import "MyDrawingView.h"

@implementation MyDrawingView

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code, if any.
    }
    return self;
}
// Only override drawRect: if you perform custom drawing.
- (void)drawRect:(CGRect)rect
{
    // Get the current context (UIKit drawing calls like drawAtPoint: use it implicitly)
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIImage *image = [UIImage imageNamed:@"image.jpg"];
    // Draws your image at the given point
    [image drawAtPoint:CGPointMake(10, 10)];
}
// Now, to use it in your view:
#import "MyDrawingView.h"

MyDrawingView *drawView = [[MyDrawingView alloc] initWithFrame:CGRectMake(0, 0, 200, 200)];
[self.view addSubview:drawView];
// Whenever you want to update that view, call:
[drawView setNeedsDisplay];
// Your method to process (draw) the image:
- (void)drawBufferWidth:(int)width height:(int)height pixels:(unsigned char*)pixels
{
    const int area = width * height;
    const int componentsPerPixel = 4;
    unsigned char pixelData[area * componentsPerPixel];
    for (int i = 0; i < area; i++)
    {
        const int offset = i * componentsPerPixel;
        // Copy this pixel's four components from the source buffer
        // (reading pixels[0..3] here would repeat the first pixel everywhere).
        pixelData[offset]     = pixels[offset];
        pixelData[offset + 1] = pixels[offset + 1];
        pixelData[offset + 2] = pixels[offset + 2];
        pixelData[offset + 3] = pixels[offset + 3];
    }
    const size_t bitsPerComponent = 8;
    const size_t bytesPerRow = ((bitsPerComponent * width) / 8) * componentsPerPixel;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef gtx = CGBitmapContextCreate(pixelData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
    CGImageRef myimage = CGBitmapContextCreateImage(gtx);
    // Convert to UIImage
    UIImage *image = [UIImage imageWithCGImage:myimage];
    // Create a rect to display
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
    // Here are the two snippets that draw the image
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Flip the coordinate system (the Core Graphics origin is bottom-left)
    CGContextTranslateCTM(context, 0, image.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    // Finally, draw your image
    CGContextDrawImage(context, imageRect, image.CGImage);
    // You can also use the following to draw your image in the drawRect: method:
    // [[UIImage imageWithCGImage:myimage] drawInRect:CGRectMake(0, 0, 145, 15)];
    CGColorSpaceRelease(colorSpace); // release the color space we created
    CGContextRelease(gtx);
    CGImageRelease(myimage);
}
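Note that UIGraphicsGetCurrentContext returns a valid context only while UIKit is asking the view to draw, so the method above only works when called from inside drawRect: (for example, in place of the image.jpg snippet shown earlier). A minimal sketch of that glue, assuming the view keeps the buffer around; the bufferWidth, bufferHeight, and pixelBuffer properties are hypothetical names, not part of the original code:
- (void)drawRect:(CGRect)rect
{
    // Hypothetical properties holding the pixel buffer and its dimensions.
    [self drawBufferWidth:self.bufferWidth
                   height:self.bufferHeight
                   pixels:self.pixelBuffer];
}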

If anyone else is dealing with this problem, try inserting the following code into Ereka's code; Dipen's solution seems like too much. Right after the comment "// What code should go here to display the image?", put the following:
CGRect myContextRect = CGRectMake(0, 0, width, height);
CGContextDrawImage(gtx, myContextRect, myimage);
CGImageRef imageRef = CGBitmapContextCreateImage(gtx);
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef); // release the +1 reference once the UIImage owns it
UIImageView *imgView = [[UIImageView alloc] initWithImage:finalImage];
[self.view addSubview:imgView];

Related

Conversion of Mat to UIImage from CvVideoCamera

I am storing frames in real time from a CvVideoCamera using the - (void)processImage:(Mat&)image delegate method.
After storing the images, I average them all into one image to replicate a long-exposure shot using this code:
Mat merge(std::vector<Mat> frames, double alpha)
{
    Mat firstFrame = frames.front();
    Mat exposed = firstFrame * alpha;
    for (size_t i = 1; i < frames.size(); i++) {
        Mat frame = frames[i];
        exposed += frame * alpha;
    }
    return exposed;
}
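For a true average the per-frame weight should be alpha = 1.0 / N, so the weights sum to one. A small usage sketch under that assumption (my example, not from the question):
// Each frame contributes equally and the result stays in the
// original intensity range instead of being scaled by N * alpha.
Mat averaged = merge(frames, 1.0 / frames.size());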
After getting the averaged image back, I convert it to a UIImage, but the image I get back is in a strange color space. Does anyone know how I can fix this?
Conversion code:
+ (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    }
    else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(cvMat.cols, cvMat.rows, 8, 8 * cvMat.elemSize(), cvMat.step[0], colorSpace, kCGImageAlphaNone | kCGBitmapByteOrderDefault, provider, NULL, false, kCGRenderingIntentDefault);
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
Here is an example (note: the plane in the middle is because I am recording off a computer monitor).
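The "strange color space" here is usually a channel-order problem rather than a color-space one: OpenCV stores frames as BGR(A), while the CGImageCreate call above declares the bytes to be RGB, so red and blue end up swapped. A minimal sketch of the usual fix, converting before handing the Mat to UIImageFromCVMat: (assumptions: the averaged Mat is 4-channel BGRA, as CvVideoCamera typically delivers, and ImageConverter is a hypothetical name for the class declaring UIImageFromCVMat:):
cv::Mat rgba;
// Swap BGRA -> RGBA so the byte order matches what CGImageCreate is told
// to expect; use CV_BGR2RGB instead if the Mat is 3-channel.
cv::cvtColor(exposed, rgba, CV_BGRA2RGBA);
UIImage *fixed = [ImageConverter UIImageFromCVMat:rgba];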

OpenCV: k-means clustering in iOS

I am trying to do k-means clustering in iOS. To do it, I convert from UIImage to cv::Mat and wrote a function to cluster the cv::Mat, but the function does not work well: the result looks almost right, except the right-hand columns come out black. I have read the OpenCV reference and have no idea what's wrong.
The code is below; if someone can help me, it would be really appreciated.
Please excuse my poor English.
- (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(
        cvMat.cols,                                    // width
        cvMat.rows,                                    // height
        8,                                             // bits per component
        8 * cvMat.elemSize(),                          // bits per pixel
        cvMat.step[0],                                 // bytes per row
        colorSpace,                                    // color space
        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // bitmap info
        provider,                                      // CGDataProviderRef
        NULL,                                          // decode
        false,                                         // should interpolate
        kCGRenderingIntentDefault                      // intent
    );
    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;
    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
    CGContextRef contextRef = CGBitmapContextCreate(
        cvMat.data,                 // pointer to data
        cols,                       // width of bitmap
        rows,                       // height of bitmap
        8,                          // bits per component
        cvMat.step[0],              // bytes per row
        colorSpace,                 // color space
        kCGImageAlphaNoneSkipLast |
        kCGBitmapByteOrderDefault); // bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    // Note: colorSpace comes from CGImageGetColorSpace (a Get, not a
    // Create/Copy), so we do not own it and must not release it here.
    return cvMat;
}
- (cv::Mat)kMeansClustering:(cv::Mat)input
{
    cv::Mat samples(input.rows * input.cols, 3, CV_32F);
    for (int y = 0; y < input.rows; y++) {
        for (int x = 0; x < input.cols; x++) {
            for (int z = 0; z < 3; z++) {
                samples.at<float>(y + x * input.rows, z) = input.at<cv::Vec3b>(y, x)[z];
            }
        }
    }
    int clusterCount = 20;
    cv::Mat labels;
    int attempts = 5;
    cv::Mat centers;
    kmeans(samples, clusterCount, labels, cv::TermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 100, 0.01), attempts, cv::KMEANS_PP_CENTERS, centers);
    cv::Mat new_image(input.rows, input.cols, input.type());
    for (int y = 0; y < input.rows; y++) {
        for (int x = 0; x < input.cols; x++)
        {
            int cluster_idx = labels.at<int>(y + x * input.rows, 0);
            new_image.at<cv::Vec3b>(y, x)[0] = centers.at<float>(cluster_idx, 0);
            new_image.at<cv::Vec3b>(y, x)[1] = centers.at<float>(cluster_idx, 1);
            new_image.at<cv::Vec3b>(y, x)[2] = centers.at<float>(cluster_idx, 2);
        }
    }
    return new_image;
}
You are supplying kMeansClustering with a four-channel image, but it wants three channels. Try losing the alpha channel.
Add this at the top of the function:
cv::cvtColor(input, input, CV_RGBA2RGB);
so it looks like this:
- (cv::Mat)kMeansClustering:(cv::Mat)input
{
    cv::cvtColor(input, input, CV_RGBA2RGB);
    cv::Mat samples(input.rows * input.cols, 3, CV_32F);
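Putting the pieces together, a rough end-to-end sketch (my example, not from the answer; it assumes all three methods live on the same class, and inputImage is a placeholder name):
cv::Mat mat = [self cvMatFromUIImage:inputImage];    // UIImage -> 4-channel RGBA Mat
cv::Mat clustered = [self kMeansClustering:mat];     // drops alpha, clusters, returns a 3-channel Mat
UIImage *result = [self UIImageFromCVMat:clustered]; // 3-channel Mat -> UIImage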

How to colorize a grayscale CCSprite in cocos2d?

I want to colorize a grayscale CCSprite in cocos2d programmatically. For example, the grayscale CCSprite is:
http://cc.cocimg.com/bbs/attachment/Fid_18/18_171059_43b48b6f1bc40a5.png
and the desired output CCSprite is:
http://cc.cocimg.com/bbs/attachment/Fid_18/18_171059_574b2f34cb78b49.png
But I cannot get the correct result.
If I use [CCSprite setColor:], the CCSprite I get is dark (setColor: multiplies the texture by the color, so it can only darken a grayscale image, not tint it brighter).
I used CCRenderTexture and tried two different blendFuncs. The first:
- (CCSprite *)try1:(CCSprite *)gray color:(ccColor3B)color
{
    CCRenderTexture *rtx = [CCRenderTexture renderTextureWithWidth:gray.contentSize.width height:gray.contentSize.height];
    ccColor4F c = ccc4FFromccc3B(color);
    [rtx beginWithClear:c.r g:c.g b:c.b a:c.a];
    gray.position = ccp(gray.contentSize.width / 2, gray.contentSize.height / 2);
    gray.blendFunc = (ccBlendFunc){GL_ONE_MINUS_SRC_COLOR, GL_SRC_COLOR};
    [gray visit];
    [rtx end];
    CCSprite *sp = [CCSprite spriteWithTexture:rtx.sprite.texture];
    sp.flipY = YES;
    return sp;
}
the other is:
- (CCSprite *)try2:(CCSprite *)gray color:(ccColor3B)color
{
    CCRenderTexture *rtx = [CCRenderTexture renderTextureWithWidth:gray.contentSize.width height:gray.contentSize.height];
    ccColor4F c = ccc4FFromccc3B(color);
    [rtx beginWithClear:c.r g:c.g b:c.b a:0];
    gray.position = ccp(gray.contentSize.width / 2, gray.contentSize.height / 2);
    gray.blendFunc = (ccBlendFunc){GL_SRC_COLOR, GL_SRC_ALPHA};
    [gray visit];
    [rtx end];
    CCSprite *sp = [CCSprite spriteWithTexture:rtx.sprite.texture];
    sp.flipY = YES;
    return sp;
}
and I even tried a shader:
#ifdef GL_ES
precision mediump float;
#endif

uniform sampler2D u_texture;
varying vec2 v_texCoord;
varying vec4 v_fragmentColor;
uniform vec4 u_fillcolor;

#pragma debug(on)

void main(void)
{
    vec4 c = texture2D(u_texture, v_texCoord);
    float r = u_fillcolor.r + (2.0 * c.r - 1.0);
    float g = u_fillcolor.g + (2.0 * c.g - 1.0);
    float b = u_fillcolor.b + (2.0 * c.b - 1.0);
    gl_FragColor = vec4(r, g, b, c.a);
}
but the outputs are not correct.
Could you help me find a way to produce the correct CCSprite?
Thanks.
(I am a new user of Stack Overflow; my reputation is not enough to add images, so I have to add them as links.)
If you are not constantly changing the colors, this method will work.
typedef struct _SOARGB {
    unsigned char a;
    unsigned char r;
    unsigned char g;
    unsigned char b;
} SOARGB;
+ (CGImageRef)colorizeImage:(CGImageRef)image {
    CGContextRef contextRef = [self createARGBBitmapContext:image];
    NSAssert(contextRef != NULL, @"Failed to create context");
    // Get image width and height. We'll use the entire image.
    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    CGRect rect = {{0, 0}, {width, height}};
    // Draw the image into the bitmap context.
    CGContextDrawImage(contextRef, rect, image);
    // Get a pointer to the image data associated with the bitmap context.
    SOARGB *data = (SOARGB *)CGBitmapContextGetData(contextRef);
    NSAssert(data != NULL, @"Could not retrieve pixel data from context");
    //
    // Do the colorization job.
    //
    // This colorization method is ONLY an example and
    // should probably be changed!
    SOARGB tint;
    tint.r = 255;
    tint.g = 0;
    tint.b = 0;
    tint.a = 0;
    for (int i = 0; i < width * height; i++) {
        SOARGB *c = &data[i];
        CGFloat alpha = c->r / 255.0;
        c->r -= (c->r - tint.r) * 0.3 * alpha * alpha;
        c->g -= (c->g - tint.g) * 0.3 * alpha * alpha;
        c->b -= (c->b - tint.b) * 0.3 * alpha * alpha;
    }
    // End of colorization job.
    // Create an image from the modified context.
    // (The caller owns the returned CGImageRef; see the note after this answer.)
    return CGBitmapContextCreateImage(contextRef);
}
+ (CGContextRef)createARGBBitmapContext:(CGImageRef)inImage {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;
    // Get image width and height. We'll use the entire image.
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);
    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes: 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
    // Use the device RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }
    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    // Zero out the data.
    memset(bitmapData, 0, bitmapByteCount);
    // Create the bitmap context. We want premultiplied ARGB, 8 bits
    // per component. Regardless of what the source image format is
    // (CMYK, grayscale, and so on) it will be converted to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }
    // Make sure to release the color space before returning.
    CGColorSpaceRelease(colorSpace);
    return context;
}
To load a sprite using the above method:
UIImage *image = [UIImage imageNamed:@"myImage.png"];
CGImageRef colorizedImageRef = [self colorizeImage:image.CGImage];
CCSprite *colorizedSprite = [CCSprite spriteWithCGImage:colorizedImageRef key:@"myImage"];
Hope this helps.
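One caveat worth noting (my note, not part of the original answer): colorizeImage: returns a +1 CGImageRef, and the bitmap context and the malloc'd buffer from createARGBBitmapContext: are never released, so this code leaks. Under manual reference counting you would at least balance the image once the sprite owns its texture:
// Balance the +1 reference returned by CGBitmapContextCreateImage.
CGImageRelease(colorizedImageRef);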

Pixel-perfect collision not working if more than two sprites (from sprite sheet) appear on scene (iPhone)?

I'm developing an iPhone game in which pixel-perfect collision works only when a single sprite appears on the scene; with more sprites it fails. Can you please provide me with some information?
I used this code for pixel-perfect collision between animated sprites (from a sprite sheet):
- (BOOL)isCollisionBetweenSpriteA:(CCSprite *)spr1 spriteB:(CCSprite *)spr2 pixelPerfect:(BOOL)pp
{
    BOOL isCollision = NO;
    CGRect intersection = CGRectIntersection([spr1 boundingBox], [spr2 boundingBox]);
    // Look for a simple bounding-box collision first
    if (!CGRectIsEmpty(intersection))
    {
        // If we're not checking for pixel-perfect collisions, return true
        if (!pp) { return YES; }
        CGPoint spr1OldPosition = spr1.position;
        CGPoint spr2OldPosition = spr2.position;
        spr1.position = CGPointMake(spr1.position.x - intersection.origin.x, spr1.position.y - intersection.origin.y);
        spr2.position = CGPointMake(spr2.position.x - intersection.origin.x, spr2.position.y - intersection.origin.y);
        intersection = CGRectIntersection([spr1 boundingBox], [spr2 boundingBox]);
        // Assuming that the sprite batch node of both sprites is the same, I just use one.
        // If each sprite has a different sprite batch node as parent, you should modify
        // the code to get the sprite batch node for each sprite and visit them.
        CCSpriteBatchNode *_sbnMain = (CCSpriteBatchNode *)spr1.parent;
        // NOTE: We are assuming that the sprite batch node is always at 0,0
        // Get intersection info
        unsigned int x = intersection.origin.x * CC_CONTENT_SCALE_FACTOR();
        unsigned int y = intersection.origin.y * CC_CONTENT_SCALE_FACTOR();
        unsigned int w = intersection.size.width * CC_CONTENT_SCALE_FACTOR();
        unsigned int h = intersection.size.height * CC_CONTENT_SCALE_FACTOR();
        unsigned int numPixels = w * h;
        // Create a render texture (make it visible only for testing purposes)
        int renderWidth = w + 1;
        int renderHeight = h + 1;
        if (renderWidth < 32)
        {
            renderWidth = 32;
        }
        if (renderHeight < 32)
        {
            renderHeight = 32;
        }
        renderTexture = [[CCRenderTexture alloc] initWithWidth:renderWidth height:renderHeight pixelFormat:kCCTexture2DPixelFormat_RGBA8888];
        // The render texture is always going to be at 0,0 - can't change it.
        renderTexture.position = CGPointMake(0, 0);
        [self addChild:renderTexture];
        renderTexture.visible = NO;
        //NSLog(@"\nintersection = (%u,%u,%u,%u), area = %u", x, y, w, h, numPixels);
        // Draw into the render texture
        [renderTexture beginWithClear:0 g:0 b:0 a:0];
        // Render both sprites: the first in RED and the second in GREEN
        glColorMask(1, 0, 0, 1);
        [_sbnMain visitSprite:spr1];
        glColorMask(0, 1, 0, 1);
        [_sbnMain visitSprite:spr2];
        glColorMask(1, 1, 1, 1);
        // Read back the color values of the intersection area
        ccColor4B *buffer = malloc(sizeof(ccColor4B) * numPixels);
        glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
        [renderTexture end];
        // Scan the buffer: a pixel with both red and green means both sprites overlap there
        unsigned int step = 1;
        for (unsigned int i = 0; i < numPixels; i += step)
        {
            ccColor4B color = buffer[i];
            if (color.r > 0 && color.g > 0)
            {
                isCollision = YES;
                break;
            }
        }
        // Free the buffer memory
        free(buffer);
        spr1.position = spr1OldPosition;
        spr2.position = spr2OldPosition;
        [renderTexture release];
        [self removeChild:renderTexture cleanup:YES];
    }
    return isCollision;
}
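A plausible explanation for the failure with more than two sprites (a guess, not a confirmed fix): if visitSprite: ends up rendering the whole batch node rather than a single sprite, every sprite on screen is drawn into the render texture, so a third sprite overlapping the intersection contributes both red and green pixels and triggers a false positive. A hedged workaround along those lines, hiding everything except the pair under test while rendering:
// Hypothetical workaround: hide every other child of the batch node so
// only spr1 and spr2 can contribute red/green pixels to the readback.
NSMutableArray *hidden = [NSMutableArray array];
for (CCSprite *child in _sbnMain.children) {
    if (child != spr1 && child != spr2 && child.visible) {
        child.visible = NO;
        [hidden addObject:child];
    }
}
// ... render into the CCRenderTexture and read the pixels back as above ...
for (CCSprite *child in hidden) {
    child.visible = YES;
}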

Call a C++ class from an Objective-C one

I wrote a C++ class that returns a UIImage (I made sure it doesn't return null), and I am trying to call it from an Objective-C (UIView-based) class as in the code below. The problem is that no image is displayed. Any suggestion to solve that? Also, when I put the C++ code directly in the Objective-C class, the image appears.
images *imaging = new images(); // the C++ class (.mm file)
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                     NSUserDomainMask,
                                                     YES);
NSString *fullPath = [[paths lastObject] stringByAppendingPathComponent:ImageFileName];
NSLog(@"%@", fullPath); // never pass external data as the format string itself
@try {
    UIView *holderView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height)];
    viewer = [[UIImageView alloc] initWithFrame:[holderView frame]];
    [viewer setImage:imaging->LoadImage(fullPath)];
    viewer.contentMode = UIViewContentModeScaleAspectFit;
    //holderView.contentMode = UIViewContentModeScaleAspectFit;
    [holderView addSubview:viewer];
    [self.view addSubview:holderView];
}
@catch (NSException *e) {
    NSLog(@"this was what happened: %@", e); // %@ prints the exception; %a would format a float
}
This is part of LoadImage, which creates the image from the buffer data:
try {
    ...............
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, MyFilter->GetOutput()->GetBufferPointer(), bufferLength, NULL);
    if (colorSpaceRef == NULL)
    {
        NSLog(@"Error allocating color space");
        CGDataProviderRelease(provider);
    }
    CGImageRef iref = CGImageCreate(Width, Height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, YES, renderingIntent);
    CGContextRef context = NULL;
    if ((Comptype == itk::ImageIOBase::SHORT) || (Comptype == itk::ImageIOBase::USHORT))
    {
        context = CGBitmapContextCreate(MyFilter->GetOutput()->GetBufferPointer(),
                                        Width,
                                        Height,
                                        bitsPerComponent,
                                        bytesPerRow,
                                        colorSpaceRef,
                                        bitmapInfo);
    }
    else
    {
        context = CGBitmapContextCreate(myimg->GetBufferPointer(),
                                        Width,
                                        Height,
                                        bitsPerComponent,
                                        bytesPerRow,
                                        colorSpaceRef,
                                        bitmapInfo);
    }
    if (context == NULL)
    {
        NSLog(@"Error: context not created");
    }
    // Create the UIImage to be displayed
    UIImage *uiimage = nil;
    if (context)
    {
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, Width, Height), iref);
        CGImageRef imageRef = CGBitmapContextCreateImage(context);
        //if ([UIImage respondsToSelector:@selector(imageWithCGImage:scale:orientation:)]) {
        //    float scale = [[UIScreen mainScreen] scale];
        //    uiimage = [UIImage imageWithCGImage:imageRef scale:scale orientation:UIImageOrientationUp];
        //} else {
        uiimage = [UIImage imageWithCGImage:imageRef];
        //}
        CGImageRelease(imageRef);
        CGContextRelease(context);
    }
    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(iref);
    CGDataProviderRelease(provider);
    return uiimage;
}
catch (itk::ExceptionObject &e)
{
    throw(e);
}
The code looks okay; the Objective-C, Objective-C++, and C++ code should all work together fine, so I'd guess the problem may be within images::LoadImage(), even though it works separately. Could we see that function?
Did you confirm that LoadImage actually returns a UIImage object?
UIImage *image = imaging->LoadImage(fullPath);
NSLog(@"class of image is \"%@\"", NSStringFromClass([image class]));
[viewer setImage:image];
Sometimes the Objective-C objects created by class factory methods in Cocoa are autoreleased, so the image may be deallocated before the view draws it. You may need to do:
uiimage = [UIImage imageWithCGImage:imageRef];
[uiimage retain];
Since this is iOS, which (before ARC) has no garbage collection like desktop Cocoa, the retain is required if the image must outlive the current autorelease pool.