We are using AVCaptureDevice on iOS to scan QR codes. We pass the camera output through AVCaptureMetadataOutput to the code that recognises the QR code, and currently we also display the camera as a separate view over our OpenGL view. However, we now want other graphics to appear over the camera preview, so we would like to get the camera data loaded onto one of our OpenGL textures.
So, is there a way to get the raw RGB data from the camera?
This is the code (below) we're using to initialise the capture device and views.
How could we modify this to access the RGB data so we can load it onto one of our GL textures? We're using C++/Objective-C.
Thanks
Shaun Southern
self.captureSession = [[AVCaptureSession alloc] init];
NSError *error;
// Set camera capture device to default and the media type to video.
AVCaptureDevice *captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
// Set video capture input: if there is a problem initialising the camera, it will give an error.
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
if (!input)
{
NSLog(@"Error Getting Camera Input");
return;
}
// Adding input source for capture session, i.e. the camera.
[self.captureSession addInput:input];
AVCaptureMetadataOutput *captureMetadataOutput = [[AVCaptureMetadataOutput alloc] init];
// Set output to capture session. Initialising an output object we will use later.
[self.captureSession addOutput:captureMetadataOutput];
// Create a new queue and set delegate for metadata objects scanned.
dispatch_queue_t dispatchQueue;
dispatchQueue = dispatch_queue_create("scanQueue", NULL);
[captureMetadataOutput setMetadataObjectsDelegate:self queue:dispatchQueue];
// Delegate should implement captureOutput:didOutputMetadataObjects:fromConnection: to get callbacks on detected metadata.
[captureMetadataOutput setMetadataObjectTypes:[captureMetadataOutput availableMetadataObjectTypes]];
// Layer that will display what the camera is capturing.
self.captureLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
[self.captureLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
gCameraPreviewView= [[[UIView alloc] initWithFrame:CGRectMake(gCamX1, gCamY1, gCamX2-gCamX1, gCamY2-gCamY1)] retain];
[self.captureLayer setFrame:gCameraPreviewView.layer.bounds];
[gCameraPreviewView.layer addSublayer:self.captureLayer];
UIViewController * lVC = [[[UIApplication sharedApplication] keyWindow] rootViewController];
[lVC.view addSubview:gCameraPreviewView];
You don't need to access the raw RGB camera frames directly to turn them into a texture, because iOS provides a texture cache that is faster than copying the data yourself.
- (void) writeSampleBuffer:(CMSampleBufferRef)sampleBuffer ofType:(NSString *)mediaType pixel:(CVImageBufferRef)cameraFrame time:(CMTime)frameTime;
In a callback method like this, you can generate a texture from those parameters using the functions below:
CVOpenGLESTextureCacheCreate(...)
CVOpenGLESTextureCacheCreateTextureFromImage(...)
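Roughly, the setup might look like this. This is a sketch of the idea, not tested code: glContext is assumed to be your existing EAGLContext, and the delegate is assumed to adopt AVCaptureVideoDataOutputSampleBufferDelegate.
// One-time setup, next to the metadata output:
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, glContext, NULL, &textureCache);
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[videoOutput setSampleBufferDelegate:self queue:dispatch_queue_create("videoQueue", NULL)];
[self.captureSession addOutput:videoOutput];
// Then, in captureOutput:didOutputSampleBuffer:fromConnection: of the video data output delegate:
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVOpenGLESTextureRef texture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, NULL,
    GL_TEXTURE_2D, GL_RGBA,
    (GLsizei)CVPixelBufferGetWidth(pixelBuffer), (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
    GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// ... draw with the texture, then release it and flush the cache for the next frame:
CFRelease(texture);
CVOpenGLESTextureCacheFlush(textureCache, 0);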
In the end we converted from the CMSampleBuffer to the raw data (so we could upload it to a GL texture) like this. It takes a little time but was fast enough for our purposes.
If there are any improvements, I'd be glad to know :)
shaun
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
if(!self.context)
{
self.context = [CIContext contextWithOptions:nil]; //only create this once
}
int xsize= CVPixelBufferGetWidth(imageBuffer);
int ysize= CVPixelBufferGetHeight(imageBuffer);
CGImageRef videoImage = [self.context createCGImage:ciImage fromRect:CGRectMake(0, 0, xsize, ysize)];
UIImage *image = [[UIImage alloc] initWithCGImage:videoImage];
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
if(!colorSpace)
{
return ;
}
size_t bitsPerPixel = 32;
size_t bitsPerComponent = 8;
size_t bytesPerPixel = bitsPerPixel / bitsPerComponent;
size_t bytesPerRow = xsize * bytesPerPixel;
size_t bufferLength = bytesPerRow * ysize;
uint32_t * tempbitmapData = (uint32_t *)malloc(bufferLength);
if(!tempbitmapData)
{
CGColorSpaceRelease(colorSpace);
return ;
}
CGContextRef cgcontext = CGBitmapContextCreate(tempbitmapData, xsize, ysize, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
if(!cgcontext)
{
free(tempbitmapData);
return;
}
CGColorSpaceRelease(colorSpace);
CGRect rect = CGRectMake(0, 0, xsize, ysize);
CGContextDrawImage(cgcontext, rect, image.CGImage);
unsigned char *bitmapData = (unsigned char *)CGBitmapContextGetData(cgcontext); // Get a pointer to the data
CGContextRelease(cgcontext);
[image release];
CallbackWithData((unsigned int *)bitmapData,xsize,ysize); //send data
free(bitmapData);
CGImageRelease(videoImage);
}
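A possible shortcut we didn't pursue: if the video data output is configured for 32BGRA, the pixel buffer can be read directly and the whole CIImage/CGContext round trip disappears. A rough sketch (videoOutput is the hypothetical AVCaptureVideoDataOutput, and CallbackWithData would then receive BGRA rather than RGBA):
// When configuring the session:
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
// In the sample buffer callback:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
size_t width  = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); // may be larger than width * 4 due to padding
unsigned char *baseAddress = (unsigned char *)CVPixelBufferGetBaseAddress(imageBuffer);
CallbackWithData((unsigned int *)baseAddress, (int)width, (int)height); // same callback, but the data is BGRA
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);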
I am making an app in Flutter that uses the device camera to apply an OpenCV filter to the current video preview.
I am using the camera plugin.
With startImageStream I am obtaining the frames of the video.
I am doing the filtering in C++, so I use ffi to send the data to the filter. Here is the header of the function in C++:
__attribute__((visibility("default"))) __attribute__((used))
uint8_t* process_image(int32_t width, int32_t height, uint8_t *bytes)
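For illustration, a minimal sketch of what such a function could look like (an assumption, not the actual filter), treating the incoming bytes as a single-channel grayscale plane and blurring it in place with OpenCV. The extern "C" guard is needed so dart:ffi can look the symbol up by name.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <cstdint>

extern "C" __attribute__((visibility("default"))) __attribute__((used))
uint8_t* process_image(int32_t width, int32_t height, uint8_t *bytes)
{
    // Wrap the caller's buffer without copying; assumes one byte per pixel (e.g. the Y plane of YUV420).
    cv::Mat gray(height, width, CV_8UC1, bytes);
    // Filter in place so the caller can read the result through the same pointer.
    cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);
    return bytes;
}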
I have a native_opencv.dart as follows
import 'dart:ffi' as ffi;
import 'dart:io';
import 'package:ffi/ffi.dart';
// C function signatures
typedef _process_image_func = ffi.Pointer<ffi.Uint8> Function(ffi.Int32 width, ffi.Int32 height, ffi.Pointer<ffi.Uint8> bytes);
// Dart function signatures
typedef _ProcessImageFunc = ffi.Pointer<ffi.Uint8> Function(int width, int height, ffi.Pointer<ffi.Uint8> bytes);
// Getting a library that holds needed symbols
ffi.DynamicLibrary _lib = Platform.isAndroid
? ffi.DynamicLibrary.open('libnative_opencv.so')
: ffi.DynamicLibrary.process();
// Looking for the functions
final _ProcessImageFunc _processImage = _lib
.lookup<ffi.NativeFunction<_process_image_func>>('process_image')
.asFunction();
ffi.Pointer<ffi.Uint8> processImage(int width, int height, ffi.Pointer<ffi.Uint8> bytes)
{
return _processImage(width, height, bytes);
}
Here I am stuck. I need to return the frames processed by the C++ filter to the app and display them on the screen.
I thought I could use CameraController and feed it an array of bytes, but I can't figure out how to do that (if it is even possible).
This is what I have so far in main.dart for that part:
void _initializeCamera() async {
// Get list of cameras of the device
List<CameraDescription> cameras = await availableCameras();
// Create the CameraController
_camera = new CameraController(cameras[0], ResolutionPreset.veryHigh);
_camera.initialize().then((_) async{
// Start ImageStream
await _camera.startImageStream((CameraImage image) => _processCameraImage(image));
});
}
Future<void> _processCameraImage(CameraImage image) async
{
Pointer<Uint8> p = allocate(count: _savedImage.planes[0].bytes.length);
// Assign the planes data to the pointers of the image
Uint8List pointerList = p.asTypedList(_savedImage.planes[0].bytes.length);
pointerList.setRange(0, _savedImage.planes[0].bytes.length, _savedImage.planes[0].bytes);
// Get the pointer of the data returned from the function to a List
Pointer<Uint8> afterP = processImage(_savedImage.width, _savedImage.height, p);
List imgData = afterP.asTypedList((_savedImage.width * _savedImage.height));
// Generate image from the converted data
imglib.Image img = imglib.Image.fromBytes(_savedImage.height, _savedImage.width, imgData);
}
I don't know how to show the filtered frames of the preview video from the camera on the app screen.
Any help is appreciated, thank you.
So I have some original image.
I need to get and display some part of this image using a specific mask. The mask is not a rectangle or a single shape; it contains different polygons and shapes.
Are there any methods or tutorials on how to implement that? Where should I start? Should I write a small shader, compare the corresponding pixels of the images, or something like that?
I will be glad for any advice.
Thank you
You can put your image in a UIImageView and add a mask to that image view with something like
myImageView.layer.mask = myMaskLayer;
To draw your custom shapes, myMaskLayer should be an instance of your own subclass of CALayer that implements the drawInContext method. The method should draw the areas to be shown in the image with full alpha and the areas to be hidden with zero alpha (and can also use in between alphas if you like). Here's an example implementation of a CALayer subclass that, when used as a mask on the image view, will cause only an oval area in the top left corner of the image to be shown:
@interface MyMaskLayer : CALayer
@end
@implementation MyMaskLayer
- (void)drawInContext:(CGContextRef)ctx {
static CGColorSpaceRef rgbColorSpace;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
rgbColorSpace = CGColorSpaceCreateDeviceRGB();
});
const CGRect selfBounds = self.bounds;
CGContextSetFillColorSpace(ctx, rgbColorSpace);
CGContextSetFillColor(ctx, (CGFloat[4]){0.0, 0.0, 0.0, 0.0});
CGContextFillRect(ctx, selfBounds);
CGContextSetFillColor(ctx, (CGFloat[4]){0.0, 0.0, 0.0, 1.0});
CGContextFillEllipseInRect(ctx, selfBounds);
}
@end
To apply such a mask layer to an image view, you might use code like this
MyMaskLayer *maskLayer = [[MyMaskLayer alloc] init];
maskLayer.bounds = self.view.bounds;
[maskLayer setNeedsDisplay];
self.imageView.layer.mask = maskLayer;
Now, if you want to get pixel data from a rendering like this it's best to render into a CGContext backed by a bitmap where you control the pixel buffer. You can inspect the pixel data in your buffer after rendering. Here's an example where I create a bitmap context and render a layer and mask into that bitmap context. Since I supply my own buffer for the pixel data, I can then go poking around in the buffer to get RGBA values after rendering.
const CGRect imageViewBounds = self.imageView.bounds;
const size_t imageWidth = CGRectGetWidth(imageViewBounds);
const size_t imageHeight = CGRectGetHeight(imageViewBounds);
const size_t bytesPerPixel = 4;
// Allocate your own buffer for the bitmap data. You can pass NULL to
// CGBitmapContextCreate and then Quartz will allocate the memory for you, but
// you won't have a pointer to the resulting pixel data you want to inspect!
unsigned char *bitmapData = (unsigned char *)malloc(imageWidth * imageHeight * bytesPerPixel);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB(); // color space for the bitmap context
CGContextRef context = CGBitmapContextCreate(bitmapData, imageWidth, imageHeight, 8, bytesPerPixel * imageWidth, rgbColorSpace, kCGImageAlphaPremultipliedLast);
// Render the image into the bitmap context.
[self.imageView.layer renderInContext:context];
// Set the blend mode to destination in, pixels become "destination * source alpha"
CGContextSetBlendMode(context, kCGBlendModeDestinationIn);
// Render the mask into the bitmap context.
[self.imageView.layer.mask renderInContext:context];
// Whatever you like here, I just chose the middle of the image.
const size_t x = imageWidth / 2;
const size_t y = imageHeight / 2;
const size_t pixelIndex = (y * imageWidth + x) * bytesPerPixel; // byte offset of the pixel (4 bytes per pixel)
const unsigned char red = bitmapData[pixelIndex];
const unsigned char green = bitmapData[pixelIndex + 1];
const unsigned char blue = bitmapData[pixelIndex + 2];
const unsigned char alpha = bitmapData[pixelIndex + 3];
NSLog(@"rgba: {%u, %u, %u, %u}", red, green, blue, alpha);
I have a problem with cocos2d and glReadPixels because it doesn't work correctly.
I found some code on the web for pixel-perfect collision and modified it for my app, but with animation, or faster animation, it doesn't work.
This is the code:
-(BOOL) isCollisionBetweenSpriteA:(CCSprite*)spr1 spriteB:(CCSprite*)spr2 pixelPerfect:(BOOL)pp
{
BOOL isCollision = NO;
CGRect intersection = CGRectIntersection([spr1 boundingBox], [spr2 boundingBox]);
// Look for simple bounding box collision
if (!CGRectIsEmpty(intersection))
{
// If we're not checking for pixel perfect collisions, return true
if (!pp) {return YES;}
// Get intersection info
unsigned int x = intersection.origin.x;
unsigned int y = intersection.origin.y;
unsigned int w = intersection.size.width;
unsigned int h = intersection.size.height;
unsigned int numPixels = w * h;
//NSLog(@"\nintersection = (%u,%u,%u,%u), area = %u",x,y,w,h,numPixels);
// Draw into the RenderTexture
[_rt beginWithClear:0 g:0 b:0 a:0];
// Render both sprites: first one in RED and second one in GREEN
glColorMask(1, 0, 0, 1);
[spr1 visit];
glColorMask(0, 1, 0, 1);
[spr2 visit];
glColorMask(1, 1, 1, 1);
// Get color values of intersection area
ccColor4B *buffer = malloc( sizeof(ccColor4B) * numPixels );
glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
/******* All this is for testing purposes *********/
// Draw the intersection rectangle in BLUE (testing purposes)
/**************************************************/
[_rt end];
// Read buffer
unsigned int step = 1;
for(unsigned int q=0; q<numPixels; q+=step) // scan every pixel in the intersection area
{
ccColor4B color = buffer[q];
if (color.r > 0 && color.g > 0)
{
isCollision = YES;
break;
}
}
// Free buffer memory
free(buffer);
}
return isCollision;
}
Where is the problem? I have tried, but nothing works.
Thank you very much.
Regards.
If you are using iOS6, have a look at this post for a solution:
CAEAGLLayer *eaglLayer = (CAEAGLLayer *) self.layer;
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES],
kEAGLDrawablePropertyRetainedBacking,
kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat,
nil];
The explanation is that iOS 6 fixes some bugs in the iOS OpenGL implementation, so that the GL buffer is (correctly) cleared each time it is presented to the screen. Here is what Apple writes about this:
Important: You must call glReadPixels before calling EAGLContext/-presentRenderbuffer: to get defined results unless you're using a retained back buffer.
The correct solution would be to call glReadPixels before the render buffer is presented to the screen. After that, it is invalidated.
The solution above is just a workaround to make the image sort of "sticky".
Be aware that it can impact your app's rendering performance. The point is that if you are using cocos2d, you cannot easily call glReadPixels before the render buffer is presented.
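If you do control the presentation code, the ordering looks roughly like this (a sketch; _context and _colorRenderbuffer are placeholders for your own EAGL objects):
GLint width = 0, height = 0;
glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderbuffer);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
GLubyte *pixels = (GLubyte *)malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels); // read while the back buffer is still defined
[_context presentRenderbuffer:GL_RENDERBUFFER];                       // present only after reading
// ... use pixels ...
free(pixels);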
Hope it helps.
I'm trying to create a 2D background for my Ogre scene that renders the camera frames from the QCAR SDK. This is on an iPad with iOS 6.
At the moment I'm retrieving the pixel data like so in renderFrameQCAR:
const QCAR::Image *image = camFrame.getImage(1);
if(image) {
pixels = (unsigned char *)image->getPixels();
}
This returns pixels in RGB888 format, which I then pass to my Ogre scene in the renderOgre() function:
if(isUpdated)
scene.setCameraFrame(pixels);
scene.m_pRoot->renderOneFrame();
The setCameraFrame(pixels) function consists of:
void CarScene::setCameraFrame(const unsigned char *pixels)
{
HardwarePixelBufferSharedPtr pBuffer = m_pBackgroundTexture->getBuffer();
pBuffer->lock(HardwareBuffer::HBL_DISCARD);
const PixelBox& pBox = pBuffer->getCurrentLock();
PixelBox *tmp = new PixelBox(screenWidth, screenHeight, 0, PF_R8G8B8, &pixels);
pBuffer->blit(pBuffer, *tmp, pBox);
pBuffer->unlock();
delete tmp;
}
In this function I'm attempting to create a new PixelBox, copy the pixels into it, and then copy that over to the pixel buffer.
When I first create my Ogre3D scene, I set up the m_pBackgroundTexture & background rect2d like so:
void CarScene::createBackground()
{
m_pBackgroundTexture = TextureManager::getSingleton().createManual("DynamicTexture", ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, TEX_TYPE_2D, m_pViewport->getActualWidth(), m_pViewport->getActualHeight(), 0, PF_R8G8B8, TU_DYNAMIC_WRITE_ONLY_DISCARDABLE);
m_pBackgroundMaterial = MaterialManager::getSingleton().create("Background", ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);
m_pBackgroundMaterial->getTechnique(0)->getPass(0)->createTextureUnitState("DynamicTexture");
m_pBackgroundMaterial->getTechnique(0)->getPass(0)->setSceneBlending(SBT_TRANSPARENT_ALPHA);
m_pBackgroundMaterial->getTechnique(0)->getPass(0)->setDepthCheckEnabled(false);
m_pBackgroundMaterial->getTechnique(0)->getPass(0)->setDepthWriteEnabled(false);
m_pBackgroundMaterial->getTechnique(0)->getPass(0)->setLightingEnabled(false);
m_pBackgroundRect = new Rectangle2D(true);
m_pBackgroundRect->setCorners(-1.0, 1.0, 1.0, -1.0);
m_pBackgroundRect->setMaterial("Background");
m_pBackgroundRect->setRenderQueueGroup(RENDER_QUEUE_BACKGROUND);
AxisAlignedBox aabInf;
aabInf.setInfinite();
m_pBackgroundRect->setBoundingBox(aabInf);
SceneNode* node = m_pSceneManager->getRootSceneNode()->createChildSceneNode();
node->attachObject(m_pBackgroundRect);
}
After this, all I get is a white background with no texture, and I have no idea why it is not displaying the output! My goal is just to have the camera rendering in the background so I can project my 3D model onto it.
Thanks,
Harry.
Has anyone an idea how to convert a UIImage to a cocos2d-x CCSprite?
My latest attempt was:
1. Store the UIImage as png on the phone
2. Load the png as a CCSprite
[UIImagePNGRepresentation(photo) writeToFile:imagePath atomically:true];
CCSprite *sprite = CCSprite::spriteWithFile(imagePath);
But this crashed in the CCObject retain function:
void CCObject::retain(void)
{
CCAssert(m_uReference > 0, "reference count should greater than 0");
++m_uReference;
}
And I do not understand how Walzer Wang's suggestion works:
http://cocos2d-x.org/boards/6/topics/3922
CCImage::initWithImageData(void* pData, int nDataLen, ...)
CCTexture2D::initWithImage(CCImage* uiImage);
CCSprite::initWithTexture(CCTexture2D* pTexture);
CCSprite* getCCSpriteFromUIImage(UIImage *photo) {
CCImage *imf =new CCImage();
NSData *imgData = UIImagePNGRepresentation(photo);
NSUInteger len = [imgData length];
Byte *byteData = (Byte*)malloc(len);
memcpy(byteData, [imgData bytes], len);
imf->initWithImageData(byteData, imgData.length);
imf->autorelease();
free(byteData); // initWithImageData decodes the data into its own buffer, so the temporary copy can be freed
CCTexture2D* pTexture = new CCTexture2D();
pTexture->initWithImage(imf);
pTexture->autorelease();
CCSprite *sprit = new CCSprite();
sprit->initWithTexture(pTexture); // createWithTexture is a static factory; on a manually allocated sprite, use initWithTexture
sprit->autorelease();
DebugLog("size :%f :%f ",sprit->getContentSize().width , sprit->getContentSize().height);
return sprit;
}
I got the solution to my own problem: you can't create a CCSprite before the CCDirector is initialized. Some config settings are missing, and cocos releases the image right after it is instantiated.
Save the UIImage to the documents directory first,
then get it using getWritablePath:
Texture2D* newTexture2D = new Texture2D();
Image* JpgImage = new Image();
JpgImage->initWithImageFile("your image path.jpg");
newTexture2D->initWithData(JpgImage->getData(),JpgImage->getDataLen(),Texture2D::PixelFormat::RGB888,JpgImage->getWidth(),JpgImage->getHeight(),Size(JpgImage->getWidth(),JpgImage->getHeight()));
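To actually display it, the texture can then be wrapped in a Sprite, roughly like this (a sketch using the cocos2d-x v3 API; someNode stands for whatever layer or scene you are adding to):
Sprite *sprite = Sprite::createWithTexture(newTexture2D);
newTexture2D->autorelease();   // the sprite retains the texture, so hand the extra reference to the pool
JpgImage->release();           // the pixel data has already been uploaded to the texture
someNode->addChild(sprite);    // someNode is whatever node you are building the scene with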