EasyAR access Camera Frames as OpenCV Mat - c++

I'm using EasyAR to develop an app on Android using C++, and I'm trying to use OpenCV with it. What I'm trying to achieve is: get the frames EasyAR captures from the camera as a Mat, do some processing with OpenCV, then return the frames to the view.
Why do all that? Simply, I'm only after EasyAR's cross-platform camera frame access (I think it's really fast; I just built the HelloAR sample).
In the HelloAR sample, there is a line:
auto frame = streamer->peek();
Is there a way to convert this for use with OpenCV?
Is there an alternative way to access camera frames from C++ on both iOS & Android (min API 16)?
Your help is appreciated, thank you.
Here is the samples link; I'm using HelloAR:
http://s3-us-west-2.amazonaws.com/easyar/sdk/EasyAR_SDK_2.0.0_Basic_Samples_Android_2017-05-29.tar.xz

Okay, I managed to find a solution for this.
Simply put, a frame (class Frame in EasyAR) contains a vector of images (probably different representations of the same frame). Accessing that vector returns an Image object with a data() method (a byte buffer), and that buffer can be used to initialize a Mat in OpenCV.
Here is the code, to clarify for anyone searching for the same:
unsigned char* imageBuffer = static_cast<unsigned char*>(frame->images().at(0)->data());
int height = frame->images()[0]->height(); // height of the image
int width = frame->images()[0]->width();   // width of the image
// The obtained frame is NV21 (YUV420sp) by default, so convert it to RGBA.
// Convert into a separate Mat rather than in place, since the output has a
// different number of channels than the input.
cv::Mat yuv(height + height / 2, width, CV_8UC1, imageBuffer);
cv::Mat rgba;
cv::cvtColor(yuv, rgba, cv::COLOR_YUV2RGBA_NV21);
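As a sanity check on the buffer layout, here is a standalone sketch (plain C++, no EasyAR or OpenCV; the helper names are my own) of why the Mat above needs height + height/2 rows, plus the BT.601 per-pixel math that the NV21-to-RGB conversion approximately performs:

```cpp
#include <algorithm>
#include <cstddef>

// NV21 stores a full-resolution Y plane followed by one interleaved V/U
// byte pair per 2x2 pixel block, hence the extra height/2 rows of width
// bytes when the buffer is wrapped as a single-channel Mat.
std::size_t nv21BufferSize(int width, int height) {
    return static_cast<std::size_t>(width) * height        // Y plane
         + static_cast<std::size_t>(width) * (height / 2); // interleaved VU
}

// Integer BT.601 YUV -> RGB for one pixel, clamped to [0, 255].
void yuvToRgb(int y, int u, int v, int& r, int& g, int& b) {
    int c = y - 16, d = u - 128, e = v - 128;
    r = std::clamp((298 * c + 409 * e + 128) >> 8, 0, 255);
    g = std::clamp((298 * c - 100 * d - 208 * e + 128) >> 8, 0, 255);
    b = std::clamp((298 * c + 516 * d + 128) >> 8, 0, 255);
}
```

A 640x480 NV21 frame, for example, occupies 640 * 480 * 3/2 bytes, which matches the (height + height/2) x width single-channel Mat.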

Related

Dicom Toolkit (DCMTK) - How to get Window Centre and Width

I am currently using DCMTK in C++. I am quite new to this toolkit but, as I understand it, I should be able to read the window centre and width for normalisation purposes.
I have a DicomImage DCM_image object with my Dicom data.
I read the values into an OpenCV Mat object. However, I now would like to normalise them.
The following shows how I read and transfer the data into an OpenCV Mat:
DicomImage DCM_image("test.dcm");
uchar *pixelData = (uchar *)(DCM_image.getOutputData(8));
cv::Mat image(int(DCM_image.getHeight()), int(DCM_image.getWidth()), CV_8U, pixelData);
Any help is appreciated. Thanks
Reading the window center and width is not difficult; however, you need to use a different constructor and pass a DcmDataset to the image.
DcmFileFormat file;
file.loadFile("test.dcm");
DcmDataset* dataset = file.getDataset();
DicomImage image(dataset, dataset->getOriginalXfer());
double windowCenter = 0.0, windowWidth = 0.0;
dataset->findAndGetFloat64(DCM_WindowCenter, windowCenter); // (0028,1050)
dataset->findAndGetFloat64(DCM_WindowWidth, windowWidth);   // (0028,1051)
But actually I do not think it is a good idea to apply the windowing to the image upon loading. Windowing is something which should be adjustable by the user. The attributes Window Center and Window Width allow multiple values which can be applied to adjust the window to the grayscale range of interest ("VOI", Values of Interest).
If you really just want to create a windowed image, you can use your code to construct the image from the file contents and use one of the createXXXImage methods that the DicomImage provides.
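To make the windowing step concrete, here is a hedged sketch (plain C++, no DCMTK; applyWindow is a made-up name) of the linear VOI windowing function defined in DICOM PS3.3 C.11.2.1.2, which is roughly what DicomImage applies once a window is set:

```cpp
#include <algorithm>
#include <cmath>

// Linear VOI windowing: values below center - width/2 map to the minimum
// output (0), values above center + width/2 map to the maximum (255), and
// everything in between is scaled linearly. Assumes width > 1.
unsigned char applyWindow(double value, double center, double width) {
    double lo = center - 0.5 - (width - 1.0) / 2.0;
    double hi = center - 0.5 + (width - 1.0) / 2.0;
    if (value <= lo) return 0;
    if (value > hi) return 255;
    double out = ((value - (center - 0.5)) / (width - 1.0) + 0.5) * 255.0;
    return static_cast<unsigned char>(std::lround(out));
}
```

For a typical CT soft-tissue window of center 40 / width 400, raw values at or below -160 come out black, values above 239 come out white, and 39.5 (the center of the ramp) maps to mid-gray.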
HTH

OpenCV GPU SURF ROI error?

I am using VS2012 C++/CLI on a Win 8.1 64 bit machine with OpenCV 3.0.
I am trying to implement the GPU version of SURF.
When I do not specify an ROI, I have no problem. Thus, the following line gives no problem and detects keypoints in the full image:
surf(*dImages->d_gpuGrayFrame,cuda::GpuMat(),Images->SurfKeypoints);
However, efforts to specify an ROI cause a crash. For example, I specify the ROI by top-left and bottom-right coordinates (which I know works in non-GPU code). In the GPU code, the following crashes if the ROI is any smaller than the source image itself (there is no crash if the ROI is the same size as the source image).
int theight = (Images->LoadFrameClone.rows - Images->CropBottom) - Images->CropTop;
int twidth = (Images->LoadFrameClone.cols - Images->CropRight) - Images->CropLeft;
Rect tRect = Rect(Images->CropLeft, Images->CropTop, twidth, theight);
cuda::GpuMat tmask = cuda::GpuMat(*dImages->d_gpuGrayFrame, tRect);
surf(*dImages->d_gpuGrayFrame, tmask, Images->SurfKeypoints); // fails on this line
I know that tmask is non-zero in size and that the underlying image is correct. As far as I can tell, the only issue is specifying an ROI in the SURF GPU call. Any leads on why this may be happening?
Thanks
I experienced the same problem in OpenCV 3.1. Presumably the SURF algorithm doesn't work with images that have a step (stride) introduced by setting an ROI. I have not tried masking to see if that makes a difference.
A workaround is to just copy the ROI to another, contiguous GpuMat. Memory-to-memory copies are almost free on the GPU (my GTX 780 does device-to-device memcpy at 142 GB/s), which makes this hack a bit less odious.
GpuMat src;               // filled with some image
GpuMat srcRoi(src, roi);  // ROI within src (a non-contiguous view)
GpuMat dst;
srcRoi.copyTo(dst);       // contiguous device-to-device copy
surf(dst, GpuMat(), KeypointsGpu, DescriptorsGpu);
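To illustrate why the view trips SURF up, here is a standalone sketch (plain C++, no CUDA or OpenCV; RoiView and makeRoi are made-up stand-ins for a GpuMat header) of how a ROI keeps the parent's row stride and therefore stops being contiguous:

```cpp
#include <cstddef>

// A ROI taken from a larger image is a view: its rows are tightly packed
// only if the ROI spans the parent's full width. Otherwise step (bytes per
// row) stays at the parent's value, which breaks kernels that assume
// contiguous rows. One byte per pixel is assumed for simplicity.
struct RoiView {
    std::size_t step;   // bytes between successive rows (parent's step)
    int width, height;  // ROI size in pixels
    bool isContinuous() const {
        return step == static_cast<std::size_t>(width);
    }
};

RoiView makeRoi(int parentWidth, int x, int y, int w, int h) {
    (void)x; (void)y; // offsets move the data pointer, not the step
    return RoiView{static_cast<std::size_t>(parentWidth), w, h};
}
```

Copying the view into a fresh allocation (as copyTo does above) produces a buffer whose step equals its width again, which is why the workaround succeeds.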

Taking a screenshot of a particular area

Looking for a way to take a screenshot of a particular area of the screen in C++ (so not the whole screen). It should then save the capture as .png, .jpg, or whatever, to use with another function afterwards.
Also, I am going to use it, somehow, with OpenCV. Thought I'd mention that; maybe it's a helpful detail.
OpenCV cannot take screenshots of your computer directly. You will need a different framework/method to do this. @Ben is correct; this link would be worth investigating.
Once you have read the image in, you will need to store it in a cv::Mat so that you are able to perform OpenCV operations on it.
In order to crop an image in OpenCV, the following code snippet would help.
IplImage* imagesource; // the screenshot obtained from your capture framework
// Transform it into the C++ cv::Mat format (this shares the data, no copy)
cv::Mat image = cv::cvarrToMat(imagesource);
// Set up a rectangle to define your region of interest
cv::Rect myROI(10, 10, 100, 100);
// Crop the full image to the region contained by the rectangle myROI
// Note that this doesn't copy the data
cv::Mat croppedImage = image(myROI);
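One practical gotcha: image(myROI) throws if the rectangle leaves the image bounds. Here is a hedged sketch (plain C++, no OpenCV; Box and clampRoi are made-up names) of clamping the requested area to the image first:

```cpp
#include <algorithm>

// A crop rectangle in pixels; plain ints so the sketch stands alone.
struct Box { int x, y, w, h; };

// Shrink a requested rectangle so it lies entirely inside an imgW x imgH
// image; the result is safe to pass to a bounds-checked ROI constructor.
Box clampRoi(Box roi, int imgW, int imgH) {
    int x0 = std::clamp(roi.x, 0, imgW);
    int y0 = std::clamp(roi.y, 0, imgH);
    int x1 = std::clamp(roi.x + roi.w, x0, imgW);
    int y1 = std::clamp(roi.y + roi.h, y0, imgH);
    return Box{x0, y0, x1 - x0, y1 - y0};
}
```

A 100x100 request at (10, 10) on a 50x50 image, for instance, shrinks to a 40x40 rectangle, and negative origins are pulled back to zero.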

Is it possible to have a cv::Mat that contains pointers to scalars rather than scalars?

I am attempting to bridge between VTK (3D visualization library) and OpenCV (image processing library).
Currently, I am doing the following:
vtkWindowToImageFilter converts vtkRenderWindow (scene) into vtkImageData (pixels of render window).
I then have to copy each pixel of vtkImageData into a cv::Mat for processing and display with OpenCV.
This process needs to run in real time, so the redundant copy (scene pixels into ImageData into Mat) severely impacts performance. I would like to map directly from scene pixels to cv::Mat.
As the scene changes, I would like the cv::Mat to automatically reference the scene. Essentially, I would like a cv::Mat<uchar *>, rather than a cv::Mat<uchar>. Does that make sense? Or am I overcomplicating it?
vtkSmartPointer<vtkImageData> image = vtkSmartPointer<vtkImageData>::New();
int dims[3];
image->GetDimensions(dims);
cv::Mat matImage(cv::Size(dims[0], dims[1]), CV_8UC3, image->GetScalarPointer());
I succeeded in pointing a cv::Mat directly at the vtkImageData pixel buffer. Some functions, such as cv::flip or matImage -= cv::Scalar(255, 0, 0), work directly on the vtkImageData, but functions like cv::resize or cv::Canny do not.
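That split makes sense once you notice which operations reuse the destination buffer. Here is a standalone sketch (plain C++, no VTK or OpenCV; View, invertInPlace, and halve are made-up names) of a non-owning header over foreign memory: in-place edits reach the owner, while size-changing operations allocate a fresh buffer and silently drop the link:

```cpp
#include <vector>

// Minimal stand-in for a Mat header wrapping foreign memory: it points at
// someone else's buffer and never owns it.
struct View {
    unsigned char* data;
    int size;
};

// Same-size, in-place operation (like matImage -= cv::Scalar(...)):
// writes go straight into the owner's buffer.
void invertInPlace(View v) {
    for (int i = 0; i < v.size; ++i) v.data[i] = 255 - v.data[i];
}

// Size-changing operation (like cv::resize): the output cannot fit the
// original buffer, so a new allocation is returned and the original
// memory is no longer updated.
std::vector<unsigned char> halve(View v) {
    std::vector<unsigned char> out(v.size / 2);
    for (int i = 0; i < static_cast<int>(out.size()); ++i)
        out[i] = v.data[i * 2];
    return out;
}
```

This is why flipping and arithmetic show up in the VTK scene while resize and Canny appear to do nothing to it: the latter write into freshly allocated Mats, not the wrapped buffer.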

How to convert IplImage to a GLubyte array?

I have a function that reads an image and does some image processing on it using various OpenCV functions. Now I want to display this image as a texture using OpenGL. For this purpose, how do I convert a cv::Mat array to a GLubyte array?
It depends on the data type in your IplImage / cv::Mat.
If it is of type CV_8U, then the pixel format is already correct and you just need to:
- ensure that your OpenCV data has the correct color format with respect to your OpenGL context (OpenCV commonly uses BGR channel ordering instead of RGB);
- if your original image has some orientation, flip it vertically using cvFlip() / cv::flip(). The rationale is that OpenCV has the origin of the coordinate frame in the top-left corner (y-axis going down), while OpenGL applications usually have the origin in the bottom-left (y-axis going up);
- use the data pointer field of IplImage / cv::Mat to get access to the pixel data and load it into OpenGL.
If your image data is in a format other than CV_8U, you first need to convert it to that pixel format using cvConvertScale() / cv::Mat::convertTo().
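The vertical flip is the step people most often get wrong, so here is a standalone sketch (plain C++, no OpenCV or OpenGL; flipVertically is a made-up name) of what cv::flip(img, img, 0) amounts to for a tightly packed pixel buffer before uploading it as a texture:

```cpp
#include <algorithm>
#include <vector>

// Swap whole rows top-to-bottom so the origin moves from the top-left
// (OpenCV's convention) to the bottom-left (OpenGL's texture convention).
// Assumes a tightly packed buffer of width * height * channels bytes.
void flipVertically(std::vector<unsigned char>& pixels,
                    int width, int height, int channels) {
    const int rowBytes = width * channels;
    for (int y = 0; y < height / 2; ++y)
        std::swap_ranges(pixels.begin() + y * rowBytes,
                         pixels.begin() + (y + 1) * rowBytes,
                         pixels.begin() + (height - 1 - y) * rowBytes);
}
```

For a Mat whose step is larger than width * channels (e.g. an ROI), you would copy it to a continuous buffer first, since this sketch assumes tightly packed rows.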