How to pass an image buffer to wasm - C++

I'm trying to pass an ArrayBuffer from JavaScript to OpenCV compiled to wasm, but sometimes it throws an exception and sometimes imdecode returns an empty Mat.
Simple HTML:
<input type='file' accept='image/*' onchange='openFile(event)'>
JavaScript code:
var openFile = function (e) {
    const fileReader = new FileReader();
    fileReader.onload = (event) => {
        const uint8Arr = new Uint8Array(event.target.result);
        passToWasm(uint8Arr);
    };
    fileReader.readAsArrayBuffer(e.target.files[0]);
};

function passToWasm(uint8ArrData) {
    // Copy the Uint8Array into the wasm heap
    const numBytes = uint8ArrData.length * uint8ArrData.BYTES_PER_ELEMENT;
    const dataPtr = Module._malloc(numBytes);
    const dataOnHeap = new Uint8Array(Module.HEAPU8.buffer, dataPtr, numBytes);
    dataOnHeap.set(uint8ArrData);
    // Call the wasm function, then free the allocation to avoid leaking
    const res = Module._image_input(dataOnHeap.byteOffset, uint8ArrData.length);
    Module._free(dataPtr);
}
C++ code:
extern "C"
{
    int image_input(uint8_t* buffer, size_t nSize) // query image input
    {
        Mat raw_data = cv::Mat(1, nSize, CV_8UC1, buffer);
        img_object = cv::imdecode(raw_data, cv::IMREAD_UNCHANGED);
        cout << img_object << endl;
        return 1;
    }
}
Please help me; I have spent many days trying to solve this problem.
I based my attempt on the following question:
How to pass image frames camera to a function in wasm (C++)?

malloc() and HEAPU8.set() can be used to achieve this. Surma's article describes how to do it in detail. Another example is this decode() function, which copies values from a JavaScript ArrayBuffer into Wasm memory.
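As a purely illustrative sketch, the allocate-copy-call pattern can be mirrored in standalone C++ with no Emscripten or OpenCV involved; `heap`, `fake_malloc`, and `checksum` below are invented stand-ins for Emscripten's HEAPU8, Module._malloc, and the exported _image_input:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// The "heap" here is just a std::vector standing in for Emscripten's HEAPU8.
std::vector<uint8_t> heap(1024);

// Stand-in for Module._malloc: returns an offset into our fake heap.
size_t fake_malloc(size_t /*nBytes*/) { return 0; }

// Stand-in for the exported image_input(): reads bytes at the given pointer.
uint32_t checksum(const uint8_t* buffer, size_t nSize) {
    uint32_t sum = 0;
    for (size_t i = 0; i < nSize; ++i) sum += buffer[i];
    return sum;
}

// Mirrors passToWasm(): allocate, copy the source bytes in, call with
// pointer + length. In real Emscripten code, forgetting Module._free here
// would leak wasm heap memory on every call.
uint32_t pass_to_wasm(const uint8_t* src, size_t nBytes) {
    size_t ptr = fake_malloc(nBytes);            // Module._malloc(numBytes)
    std::memcpy(heap.data() + ptr, src, nBytes); // dataOnHeap.set(uint8ArrData)
    return checksum(heap.data() + ptr, nBytes);  // Module._image_input(ptr, len)
}
```

The key point the sketch makes: the wasm function never sees the JavaScript array, only an offset into linear memory, so the copy into the heap must happen before the call.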

Related

How can I import Array<Mat> to JNI from Kotlin?

I am new to JNI coding.
My problem is that when debugging in Android Studio, I get this error: "error: call to implicitly-deleted copy constructor of 'cv::Mat'"
I don't know why; I guess it's because of how I try to convert 'jlong' and 'jlongArray' to 'Mat'. My native-lib.cpp was:
JNIEXPORT jlong JNICALL
Java_com_android_example_panoramacamera_fragments_CameraFragment_imagesPass(JNIEnv *env,
        jobject thiz, jlongArray image_in_,
        jlong image_out_) {
    // TODO: implement imagesPass()
    Stitcher::Mode mode = Stitcher::PANORAMA;
    Mat *image_in = (Mat*) image_in_, *image_out = (Mat*) image_out_;
    // Create a Stitcher class object with mode panorama
    Ptr<Stitcher> stitcher = Stitcher::create(mode, false);
    // Command to stitch all the images present in the image array
    Stitcher::Status status = stitcher->stitch(*image_in, *image_out);
    if(status == Stitcher::OK){
        return (jlong) image_out;
    }
    return 0; // stitching failed
}
And my Kotlin script:
private external fun imagesPass(imageIn: LongArray, imageOut: Long): Long

override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == pickImageCode && resultCode == RESULT_OK) {
        if (data != null) {
            if (data.clipData != null) {
                val count = data.clipData!!.itemCount
                for (i in 0 until count) {
                    val imageUri = data.clipData!!.getItemAt(i).uri
                    val imageStream: InputStream? = context?.contentResolver?.openInputStream(imageUri)
                    val bitmap = BitmapFactory.decodeStream(imageStream)
                    val mat = Mat()
                    Utils.bitmapToMat(bitmap, mat)
                    imagesMat[i] = mat
                }
            } else {
                val imageUri = data.data
                val imageStream: InputStream? = imageUri?.let { context?.contentResolver?.openInputStream(it) }
                val bitmap = BitmapFactory.decodeStream(imageStream)
                val mat = Mat()
                Utils.bitmapToMat(bitmap, mat)
                imagesMat[0] = mat
            }
        }
        for (i in imagesMat.indices) {
            longArray[i] = imagesMat[i].nativeObj
        }
        long = imagesPass(longArray, imageStitch.nativeObj)
        imageStitch = Mat(long)
    }
}
and as you can see, I have tried to pass a Mat, but since jni.h is so limited, I had to convert my Mat to Long so I could use both Array and Mat.
But then my "opencv2/core/mat.inl.hpp" started to show an error:
inline Mat _InputArray::getMat(int i) const
{
    if( kind() == MAT && i < 0 )
        return *(const Mat*)obj; // This line gets the error
    return getMat_(i);
}
So my question is how can I convert from jlongArray to InputArray? or How can I import Array to JNI from Kotlin?
Thank you very much.
Doing something like
Mat *image_in = (Mat*) image_in_
is incorrect. For all practical purposes, always treat JNI objects as opaque: make no assumptions about how they store the underlying data, and instead use the JNI APIs to manipulate them, including retrieving the actual data. A jlongArray is not equivalent to something like jlong array[] = {1, 2, 3}.
From what I understand, you need access to the underlying native elements of the Java jlongArray. There are two options:
1. Get the backing native elements using GetLongArrayElements(). This provides a native array of jlongs that remains valid until ReleaseLongArrayElements() is called.
2. Copy a range of elements from the jlongArray using GetLongArrayRegion(), which fills a caller-supplied jlong* buffer. This buffer's lifetime is not tied to the jlongArray; if its elements need to be copied back to the original array, use SetLongArrayRegion().
Once you have access to the native buffer, it can be used in C++ like an ordinary array of longs.
An example of the first approach would look something like:
JNIEXPORT jlong JNICALL
Java_com_android_example_panoramacamera_fragments_CameraFragment_imagesPass
(JNIEnv * env, jobject thiz, jlongArray imageIn, jlong ImageOut) {
    jsize len = env->GetArrayLength(imageIn);
    jlong * nativeImageList = env->GetLongArrayElements(imageIn, NULL);
    // Now one can do something like
    // Mat* image_in = reinterpret_cast<Mat*>(nativeImageList[idx]);
    // This should give the native version of the images
    for(jsize idx = 0; idx < len; idx++) {
        std::printf("%lld ", (long long)nativeImageList[idx]);
    }
    std::printf("\n");
    // Once done with the array, release it back to the JVM
    env->ReleaseLongArrayElements(imageIn, nativeImageList, 0);
    return ImageOut;
}
In the above code, nativeImageList is an array of jlong equivalent to what was passed in from the Java/Kotlin layer as longArray. Each element of nativeImageList is the same value that was stored by the line
longArray[i] = imagesMat[i].nativeObj
Hence nativeImageList[0] is the value of imagesMat[0].nativeObj, and so on. Each element is a handle to an underlying image, so cast the element's value:
Mat* image_in = reinterpret_cast<Mat*>(nativeImageList[0]);
Note the difference from
Mat* image_in = (Mat*) image_in_
Here, the native element is first retrieved and then cast to Mat*, rather than casting the jlongArray object directly.

Specific filepath to store Screen Record using CGDisplayStream in OSX

I have been working on a C++ command-line tool to record the screen. After some searching I came up with the following code, and the screen does appear to be recorded when I compile and run it. I am looking for a way to specify the file path where the recording should be stored, and I would also like to append a timestamp to the filename. If anybody has a better approach or method, please suggest it here. Any leads are appreciated. Thanks
#include <ApplicationServices/ApplicationServices.h>
#include <iostream>
#include <unistd.h>

int main(int argc, const char * argv[]) {
    CGRect mainMonitor = CGDisplayBounds(CGMainDisplayID());
    CGFloat monitorHeight = CGRectGetHeight(mainMonitor);
    CGFloat monitorWidth = CGRectGetWidth(mainMonitor);
    const void *keys[1] = { kCGDisplayStreamSourceRect };
    const void *values[1] = { CGRectCreateDictionaryRepresentation(CGRectMake(0, 0, 100, 100)) };
    CFDictionaryRef properties = CFDictionaryCreate(NULL, keys, values, 1, NULL, NULL);
    CGDisplayStreamRef stream = CGDisplayStreamCreate(CGMainDisplayID(), monitorWidth, monitorHeight, '420f', properties, ^(CGDisplayStreamFrameStatus status, uint64_t displayTime, IOSurfaceRef frameSurface, CGDisplayStreamUpdateRef updateRef){});
    CGDirectDisplayID displayID = CGMainDisplayID();
    CGImageRef image_create = CGDisplayCreateImage(displayID);
    CFRunLoopSourceRef runLoop = CGDisplayStreamGetRunLoopSource(stream);
    // CFRunLoopAddSource(<#CFRunLoopRef rl#>, runLoop, <#CFRunLoopMode mode#>);
    CGError err = CGDisplayStreamStart(stream);
    if (err == CGDisplayNoErr) {
        std::cout << "WORKING" << std::endl;
        sleep(5);
    } else {
        std::cout << "Error: " << err << std::endl;
    }
    return 0;
}
You should do that in the callback you provide to CGDisplayStreamCreate. You can access the pixels via IOSurfaceGetBaseAddress (see the other IOSurface functions). If you don't want to do the pixel twiddling yourself, you can create a CVPixelBuffer from the IOSurface with CVPixelBufferCreateWithBytes, then create a CIImage with [CIImage imageWithCVImageBuffer:] and save that to a file as seen here.

WinRT C++ (Win10) Accessing bytes from SoftwareBitmap / BitmapBuffer

To process the preview frames of my camera in OpenCV, I need access to the raw pixel data/bytes. The new SoftwareBitmap should provide exactly this.
There is an example for C#, but in Visual C++ I can't get the IMemoryBufferByteAccess interface (see remarks) working.
Code that throws exceptions:
// Capture the preview frame
return create_task(_mediaCapture->GetPreviewFrameAsync(videoFrame))
    .then([this](VideoFrame^ currentFrame)
{
    // Collect the resulting frame
    auto previewFrame = currentFrame->SoftwareBitmap;
    auto buffer = previewFrame->LockBuffer(Windows::Graphics::Imaging::BitmapBufferAccessMode::ReadWrite);
    auto reference = buffer->CreateReference();
    // Get a pointer to the pixel buffer
    byte* pData = nullptr;
    UINT capacity = 0;
    // Obtain ByteAccess
    ComPtr<IUnknown> inspectable = reinterpret_cast<IUnknown*>(buffer);
    // Query the IMemoryBufferByteAccess interface.
    Microsoft::WRL::ComPtr<IMemoryBufferByteAccess> bufferByteAccess;
    ThrowIfFailed(inspectable.As(&bufferByteAccess)); // ERROR ---> Throws HRESULT = E_NOINTERFACE
    // Retrieve the buffer data.
    ThrowIfFailed(bufferByteAccess->GetBuffer(_Out_ &pData, _Out_ &capacity)); // ERROR ---> Throws HRESULT = E_NOINTERFACE, because bufferByteAccess is null
I tried this too:
HRESULT hr = ((IMemoryBufferByteAccess*)reference)->GetBuffer(&pData, &capacity);
The HRESULT is S_OK, but accessing pData gives an access violation reading memory.
Thanks for your help.
You should use reference instead of buffer in the reinterpret_cast:
#include "pch.h"
#include <wrl\wrappers\corewrappers.h>
#include <wrl\client.h>

MIDL_INTERFACE("5b0d3235-4dba-4d44-865e-8f1d0e4fd04d")
IMemoryBufferByteAccess : IUnknown
{
    virtual HRESULT STDMETHODCALLTYPE GetBuffer(
        BYTE **value,
        UINT32 *capacity
        );
};

auto previewFrame = currentFrame->SoftwareBitmap;
auto buffer = previewFrame->LockBuffer(BitmapBufferAccessMode::ReadWrite);
auto reference = buffer->CreateReference();

ComPtr<IMemoryBufferByteAccess> bufferByteAccess;
HRESULT result = reinterpret_cast<IInspectable*>(reference)->QueryInterface(IID_PPV_ARGS(&bufferByteAccess));
if (result == S_OK)
{
    WriteLine("Get interface successfully");
    BYTE* data = nullptr;
    UINT32 capacity = 0;
    result = bufferByteAccess->GetBuffer(&data, &capacity);
    if (result == S_OK)
    {
        WriteLine("get data access successfully, capacity: " + capacity);
    }
}
Based on the answer from @jeffrey-chen and the example from @kennykerr, I've assembled a slightly cleaner solution:
#include <wrl/client.h>
// other includes, as required by your project

MIDL_INTERFACE("5b0d3235-4dba-4d44-865e-8f1d0e4fd04d")
IMemoryBufferByteAccess : ::IUnknown
{
    virtual HRESULT __stdcall GetBuffer(BYTE **value, UINT32 *capacity) = 0;
};

// your code:
auto previewFrame = currentFrame->SoftwareBitmap;
auto buffer = previewFrame->LockBuffer(BitmapBufferAccessMode::ReadWrite);
auto bufferByteAccess = buffer->CreateReference().as<IMemoryBufferByteAccess>();
WriteLine("Get interface successfully"); // otherwise an exception is thrown

BYTE* data = nullptr;
UINT32 capacity = 0;
winrt::check_hresult(bufferByteAccess->GetBuffer(&data, &capacity));
WriteLine("get data access successfully, capacity: " + capacity);
I'm currently accessing the raw unsigned char* data of each frame I obtain in a MediaFrameReader::FrameArrived event, without using WRL and COM.
Here is how:
void MainPage::OnFrameArrived(MediaFrameReader ^reader, MediaFrameArrivedEventArgs ^args)
{
    MediaFrameReference ^mfr = reader->TryAcquireLatestFrame();
    VideoMediaFrame ^vmf = mfr->VideoMediaFrame;
    VideoFrame ^vf = vmf->GetVideoFrame();
    SoftwareBitmap ^sb = vf->SoftwareBitmap;
    Buffer ^buff = ref new Buffer(sb->PixelHeight * sb->PixelWidth * 2); // assumes 2 bytes per pixel
    sb->CopyToBuffer(buff);
    DataReader ^dataReader = DataReader::FromBuffer(buff);
    Platform::Array<unsigned char, 1> ^arr = ref new Platform::Array<unsigned char, 1>(buff->Length);
    dataReader->ReadBytes(arr);
    // here arr->Data is a pointer to the raw pixel data
}
NOTE: The MediaCapture object needs to be configured with MediaCaptureMemoryPreference::Cpu in order to have a valid SoftwareBitmap
Hope the above helps someone

Pass Node.js Buffer to C++ addon

test.js
var buf = new Buffer(100);
for (var i = 0; i < 100; i++) buf[i] = i;
addon.myFunc(buf);
addon.cpp
Handle<Value> myFunc(const Arguments& args) {
    char *buf = SOMETHING(args[0]);
    return Undefined();
}
How do I get a pointer to the buffer's data inside the C++ function?
What should I write in place of SOMETHING(args[0])?
I have node_buffer.h open in my editor, but I cannot figure it out.
Node version = v0.10.29
You can do:
char* buf = node::Buffer::Data(args[0]);
to directly access the bytes of a Buffer.
According to the Node.js node binding documentation, the args[0] argument can be accessed as a string with:
String::AsciiValue v(args[0]->ToString());

Libjpeg write image to memory data

I would like to save an image into memory (a std::vector) using the libjpeg library.
I found these functions:
init_destination
empty_output_buffer
term_destination
My question is how to do this safely and properly in a parallel program, as my function may be executed from different threads.
I'm working in C++ with Visual Studio 2010.
Other libraries with callback functionality usually take an additional function parameter for user data, but here I don't see any way to pass extra state, e.g. a pointer to my local vector instance.
Edit:
The nice solution to my question is here: http://www.christian-etter.de/?cat=48
The nice solution is described here: http://www.christian-etter.de/?cat=48
typedef struct _jpeg_destination_mem_mgr
{
    jpeg_destination_mgr mgr;
    std::vector<unsigned char> data;
} jpeg_destination_mem_mgr;
Initialization:
static void mem_init_destination( j_compress_ptr cinfo )
{
    jpeg_destination_mem_mgr* dst = (jpeg_destination_mem_mgr*)cinfo->dest;
    dst->data.resize( JPEG_MEM_DST_MGR_BUFFER_SIZE );
    cinfo->dest->next_output_byte = dst->data.data();
    cinfo->dest->free_in_buffer = dst->data.size();
}
When we have finished, we need to shrink the buffer to the actual size:
static void mem_term_destination( j_compress_ptr cinfo )
{
    jpeg_destination_mem_mgr* dst = (jpeg_destination_mem_mgr*)cinfo->dest;
    dst->data.resize( dst->data.size() - cinfo->dest->free_in_buffer );
}
When the buffer is too small, we need to grow it:
static boolean mem_empty_output_buffer( j_compress_ptr cinfo )
{
    jpeg_destination_mem_mgr* dst = (jpeg_destination_mem_mgr*)cinfo->dest;
    size_t oldsize = dst->data.size();
    dst->data.resize( oldsize + JPEG_MEM_DST_MGR_BUFFER_SIZE );
    cinfo->dest->next_output_byte = dst->data.data() + oldsize;
    cinfo->dest->free_in_buffer = JPEG_MEM_DST_MGR_BUFFER_SIZE;
    return true;
}
Callback configuration:
static void jpeg_mem_dest( j_compress_ptr cinfo, jpeg_destination_mem_mgr * dst )
{
    cinfo->dest = (jpeg_destination_mgr*)dst;
    cinfo->dest->init_destination = mem_init_destination;
    cinfo->dest->term_destination = mem_term_destination;
    cinfo->dest->empty_output_buffer = mem_empty_output_buffer;
}
And sample usage:
jpeg_destination_mem_mgr dst_mem;
jpeg_compress_struct_wrapper cinfo;
j_compress_ptr pcinfo = cinfo;
jpeg_mem_dest( cinfo, &dst_mem );
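The grow-on-demand destination pattern above can be sketched without libjpeg itself; this is an illustrative stand-in, not the libjpeg API, and `MemDest`, `BUFFER_CHUNK`, and `write` are invented names mirroring the three callbacks:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Stands in for JPEG_MEM_DST_MGR_BUFFER_SIZE (small here to force growth).
constexpr size_t BUFFER_CHUNK = 16;

// A writer owns a std::vector, hands out a (next_output_byte, free_in_buffer)
// window into it, grows the vector when the window is exhausted, and trims
// the unused tail when finished -- exactly the three libjpeg callbacks.
struct MemDest {
    std::vector<uint8_t> data;
    uint8_t* next_output_byte = nullptr;
    size_t free_in_buffer = 0;

    void init() {                        // mem_init_destination
        data.resize(BUFFER_CHUNK);
        next_output_byte = data.data();
        free_in_buffer = data.size();
    }
    void grow() {                        // mem_empty_output_buffer
        size_t oldsize = data.size();
        data.resize(oldsize + BUFFER_CHUNK);
        next_output_byte = data.data() + oldsize;
        free_in_buffer = BUFFER_CHUNK;
    }
    void term() {                        // mem_term_destination
        data.resize(data.size() - free_in_buffer);
    }
    // Stands in for libjpeg emitting compressed bytes into the window.
    void write(const uint8_t* src, size_t n) {
        while (n > 0) {
            if (free_in_buffer == 0) grow();
            size_t chunk = n < free_in_buffer ? n : free_in_buffer;
            std::memcpy(next_output_byte, src, chunk);
            next_output_byte += chunk;
            free_in_buffer -= chunk;
            src += chunk;
            n -= chunk;
        }
    }
};
```

Because each compression call uses its own MemDest (its own vector), nothing is shared between threads, which is what makes the embed-the-state-in-the-manager-struct approach thread-safe.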