audio synthesis with MoMu STK - c++

Has anybody successfully implemented an Instrument using MoMu STK on iOS? I am a bit stuck with the initialization of a stream for an Instrument.
I am using the tutorial code, and it looks like something is missing:
RtAudio dac;
// Figure out how many bytes in an StkFloat and setup the RtAudio stream.
RtAudio::StreamParameters parameters;
parameters.deviceId = dac.getDefaultOutputDevice();
parameters.nChannels = 1;
RtAudioFormat format = ( sizeof(StkFloat) == 8 ) ? RTAUDIO_FLOAT64 : RTAUDIO_FLOAT32;
unsigned int bufferFrames = RT_BUFFER_SIZE;
dac.openStream( &parameters, NULL, format, (unsigned int)Stk::sampleRate(), &bufferFrames, &tick, (void *)&data );
The error description says that the output parameters for the output device are invalid, but when I skip assigning the device id it doesn't work either.
Any ideas would be great.

RtAudio is only for desktop apps; there is no need to open a stream when implementing on iOS.
example:
Header file:
#import "Simple.h"
// make struct to hold
struct TickData {
Simple *synth;
};
// Make an instance of the struct in the @interface
TickData data;
Implementation file:
// init the synth:
data.synth = new Simple();
data.synth->keyOff();
// to trigger note on/off:
data.synth->noteOn(frequency, velocity);
data.synth->noteOff(velocity);
// audio callback method:
for (int i=0; i < FRAMESIZE; i++) {
buffer[i] = data.synth->tick();
}
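For reference, here is roughly how these pieces hang together with MoMu's audio layer. This is only a minimal sketch assuming the MoAudio::init/MoAudio::start API used in the tutorial linked in the next answer; check the exact callback signature against your MoMu version:
#define SRATE 44100
#define FRAMESIZE 128
#define NUMCHANNELS 2
// audio callback: fill each frame with one tick of the synth
void audioCallback(Float32 *buffer, UInt32 numFrames, void *userData)
{
    TickData *data = (TickData *)userData;
    for (UInt32 i = 0; i < numFrames; i++)
    {
        // write the same sample to both interleaved channels
        buffer[2*i] = buffer[2*i+1] = data->synth->tick();
    }
}
// setup, e.g. in viewDidLoad:
// MoAudio::init(SRATE, FRAMESIZE, NUMCHANNELS);
// MoAudio::start(audioCallback, &data);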

Yep, I have a couple of apps in the store with STK classes running on them. Bear in mind that the setup required to run STK on iOS is different from the one required to run it on your desktop.
Here's a tutorial on how to use STK classes inside an iOS app:
https://arielelkin.github.io/articles/mandolin


How to publish Int16MultiArray from rosserial arduino

I am trying to publish an Int16MultiArray for the ROS package mecanum_drive: https://github.com/dudasdavid/mecanum_drive
My issue is that I can't seem to publish the array from my Arduino. (I am using a Teensy 4.1.)
#include <std_msgs/Int16MultiArray.h>
ros::NodeHandle nh;
std_msgs::Int16MultiArray wheel_ticks;
ros::Publisher wheel_ticks_pub("wheel_ticks", &wheel_ticks);
void setup() {
nh.getHardware()->setBaud(115200); //was 115200
nh.initNode(); // init ROS
nh.advertise(wheel_ticks_pub);
}
void loop() {
// put your main code here, to run repeatedly:
//I have tried the code below, which uploads to the Arduino, but rostopic then says that it doesn't contain any data
/*
short value[4] = {0,100,0,0};
wheel_ticks.data = value;
*/
//I also tried the code below, which uploads, but then the Teensy loses its serial port (the Arduino port menu says "[no_device] Serial (Teensy 4.1)"):
/*
wheel_ticks.data[0] = 10;
wheel_ticks.data[1] = 5;
*/
//below gives this error: cannot convert '<brace-enclosed initializer list>' to 'std_msgs::Int16MultiArray::_data_type* {aka short int*}' in assignment
/*
wheel_ticks.data = {0,0,0,1};
*/
wheel_ticks_pub.publish(&wheel_ticks);
nh.spinOnce();
}
Everything I have tried has either failed to upload, uploaded but messed up the serial port, or uploaded with rostopic echo saying the message is empty.
Thanks for looking at this, I hope you can help!
A very specific limitation of rosserial is that arrays have an extra field for the data length. This is needed since the data field is implemented as a pointer, so there is no good way to get the length from it. The message type actually looks like this:
class Int16MultiArray {
std_msgs::MultiArrayLayout layout;
uint32_t data_length;
int16_t * data;
};
So, all you have to do is set the data_length field before sending a message:
#include <std_msgs/Int16MultiArray.h>
ros::NodeHandle nh;
std_msgs::Int16MultiArray wheel_ticks;
ros::Publisher wheel_ticks_pub("wheel_ticks", &wheel_ticks);
void setup() {
nh.getHardware()->setBaud(115200); //was 115200
nh.initNode(); // init ROS
nh.advertise(wheel_ticks_pub);
}
void loop() {
short value[4] = {0,100,0,0};
wheel_ticks.data = value;
wheel_ticks.data_length = 4;
wheel_ticks_pub.publish(&wheel_ticks);
nh.spinOnce();
}
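With data_length set, the array should publish correctly. You can verify it from the host side; a quick check, assuming the usual rosserial bringup and that the Teensy enumerates as /dev/ttyACM0 (adjust the port to your setup):
rosrun rosserial_python serial_node.py _port:=/dev/ttyACM0 _baud:=115200
rostopic echo /wheel_ticks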

Read text multiple times with Elementary access on native Tizen TV

Hello fellow programmers,
I am trying to use the text-to-speech functionality provided by the Elementary access library (from Enlightenment) in a native app for Tizen TV.
So far I have been able to read text, but only once: when I call the API multiple times, only the first call is rendered to audio.
I have investigated the sources of elementary access, but can't really spot the problem.
Here is a sample of my app:
#include <app.h>
#include <Elementary.h>
#include <unistd.h>
#include <string>
using namespace std;
const char APP_PKG[] = "org.tizen.tts";
/// Struct to store application information, passed at start time to the EFL framework
struct _appdata
{
const char *name;
Evas_Object *win;
};
static bool
_create(void *data)
{
elm_config_access_set(EINA_TRUE);
return true;
}
static bool
_control(app_control_h app_control, void *data)
{
for (int i = 1; i <= 2; i++) {
string text = to_string(i) + ". Read me please.";
elm_access_say(text.c_str());
// sleep(10);
}
return true;
}
int
main(int argc, char *argv[])
{
_appdata ad = {0,};
ad.name = APP_PKG;
ui_app_lifecycle_callback_s lifecycle_callback = {0,};
lifecycle_callback.create = _create;
lifecycle_callback.app_control = _control;
return ui_app_main(argc, argv, &lifecycle_callback, &ad);
}
I have tried using elm_access_force_say, and also moving elm_config_access_set(EINA_TRUE) inside the loop, but every time the sentence is only said once.
Here in the source is some code called by elm_access_say. It seems that the API calls an espeak executable; strangely, I can't find any espeak executable on the device.
Tizen provides an API for using the TTS engine in native apps, but only for mobile and watches (at least in the documentation).
If someone ever tried to use the TTS engine on native TV, or have more experience with the Elementary access library, and would like to share some knowledge, I would be really thankful.
If you are using Tizen 4.0 or above and you want to read text multiple times using the accessibility framework, please use elm_atspi_bridge_utils_say. The code snippet below demonstrates how to read consecutive numbers.
static void read_n_times(int n) {
char buf[32];
for (int i = 1; i <= n; ++i) {
snprintf(buf, sizeof(buf), "%d", i);
elm_atspi_bridge_utils_say(buf, EINA_FALSE, say_cb, NULL);
}
}
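The snippet assumes a say_cb completion callback; its exact type is documented in the Elm_Atspi_Bridge reference linked below. The following is only a hypothetical sketch of what it might look like:
static void say_cb(void *data, const char *say_signal)
{
    /* hypothetical: say_signal reports the reading state; check the
       Elm_Atspi_Say_Signal_Cb typedef in the reference below for the real shape */
    dlog_print(DLOG_INFO, "TTS", "say signal: %s", say_signal);
}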
Full specification of elm_atspi_bridge_utils_say can be found here:
https://developer.tizen.org/dev-guide/tizen-iot-headed/4.0/group__Elm__Atspi__Bridge.html#gafde6945c1451cb8752c67f2aa871d12d
Use the page below for the 4.0 wearable API reference; the API reference of tizen-iot-headed is not up-to-date.
https://docs.tizen.org/application/native/api/mobile/4.0/group__Elm__Atspi__Bridge.html

How to access Audio data from JUCE Demo Audio Plugin Host?

I am working on a project which requires me to record audio data as .wav files (1 second each) from a MIDI synth plugin loaded in the JUCE Demo Audio Plugin Host. Basically, I need to create a dataset automatically (corresponding to different parameter configurations) from the MIDI synth.
Will I have to send MIDI Note On/Off messages to generate audio data? Or is there a better way of getting audio data?
AudioBuffer<FloatType> getBusBuffer (AudioBuffer<FloatType>& processBlockBuffer) const
Is this the function which will solve my needs? If yes, how would I store the data? If not, could someone please guide me to the right function/solution.
Thank you.
I'm not exactly sure what you're asking, so I'm going to guess:
You need to programmatically trigger some MIDI notes in your synth, then write all the audio to a .wav file, right?
Assuming you already know JUCE, it would be fairly trivial to make an app that opens your plugin, sends MIDI, and records audio, but it's probably just easier to tweak the AudioPluginHost project.
Let's break it into a few simple steps (first, open the AudioPluginHost project):
Programmatically send MIDI
Look at GraphEditorPanel.h, specifically the class GraphDocumentComponent. It has a private member variable: MidiKeyboardState keyState;. This collects incoming MIDI Messages and then inserts them into the incoming Audio & MIDI buffer that is sent to the plugin.
You can simply call keyState.noteOn (midiChannel, midiNoteNumber, velocity) and keyState.noteOff (midiChannel, midiNoteNumber, velocity) to trigger note-on and note-off events.
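For instance, a sketch of a helper you might add to GraphDocumentComponent (the channel, note, and velocity values here are arbitrary):
// Hypothetical helper: trigger middle C for one second on MIDI channel 1.
// MidiKeyboardState inserts these events into the MIDI buffer that is
// handed to the plugin on the next audio callback.
void playTestNote()
{
    const int midiChannel = 1;
    const int noteNumber = 60; // middle C
    keyState.noteOn (midiChannel, noteNumber, 0.8f);
    // release the key one second later, off the audio thread
    Timer::callAfterDelay (1000, [this, midiChannel, noteNumber]
    {
        keyState.noteOff (midiChannel, noteNumber, 0.8f);
    });
}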
Record Audio Output
This is a fairly straightforward thing to do in JUCE — you should start by looking at the JUCE Demos. The following example records output audio in the background, but there are plenty of other ways to do it:
class AudioRecorder : public AudioIODeviceCallback
{
public:
AudioRecorder (AudioThumbnail& thumbnailToUpdate)
: thumbnail (thumbnailToUpdate)
{
backgroundThread.startThread();
}
~AudioRecorder()
{
stop();
}
//==============================================================================
void startRecording (const File& file)
{
stop();
if (sampleRate > 0)
{
// Create an OutputStream to write to our destination file...
file.deleteFile();
ScopedPointer<FileOutputStream> fileStream (file.createOutputStream());
if (fileStream.get() != nullptr)
{
// Now create a WAV writer object that writes to our output stream...
WavAudioFormat wavFormat;
auto* writer = wavFormat.createWriterFor (fileStream.get(), sampleRate, 1, 16, {}, 0);
if (writer != nullptr)
{
fileStream.release(); // (passes responsibility for deleting the stream to the writer object that is now using it)
// Now we'll create one of these helper objects which will act as a FIFO buffer, and will
// write the data to disk on our background thread.
threadedWriter.reset (new AudioFormatWriter::ThreadedWriter (writer, backgroundThread, 32768));
// Reset our recording thumbnail
thumbnail.reset (writer->getNumChannels(), writer->getSampleRate());
nextSampleNum = 0;
// And now, swap over our active writer pointer so that the audio callback will start using it..
const ScopedLock sl (writerLock);
activeWriter = threadedWriter.get();
}
}
}
}
void stop()
{
// First, clear this pointer to stop the audio callback from using our writer object..
{
const ScopedLock sl (writerLock);
activeWriter = nullptr;
}
// Now we can delete the writer object. It's done in this order because the deletion could
// take a little time while remaining data gets flushed to disk, so it's best to avoid blocking
// the audio callback while this happens.
threadedWriter.reset();
}
bool isRecording() const
{
return activeWriter != nullptr;
}
//==============================================================================
void audioDeviceAboutToStart (AudioIODevice* device) override
{
sampleRate = device->getCurrentSampleRate();
}
void audioDeviceStopped() override
{
sampleRate = 0;
}
void audioDeviceIOCallback (const float** inputChannelData, int numInputChannels,
float** outputChannelData, int numOutputChannels,
int numSamples) override
{
const ScopedLock sl (writerLock);
if (activeWriter != nullptr && numInputChannels >= thumbnail.getNumChannels())
{
activeWriter->write (inputChannelData, numSamples);
// Create an AudioBuffer to wrap our incoming data, note that this does no allocations or copies, it simply references our input data
AudioBuffer<float> buffer (const_cast<float**> (inputChannelData), thumbnail.getNumChannels(), numSamples);
thumbnail.addBlock (nextSampleNum, buffer, 0, numSamples);
nextSampleNum += numSamples;
}
// We need to clear the output buffers, in case they're full of junk..
for (int i = 0; i < numOutputChannels; ++i)
if (outputChannelData[i] != nullptr)
FloatVectorOperations::clear (outputChannelData[i], numSamples);
}
private:
AudioThumbnail& thumbnail;
TimeSliceThread backgroundThread { "Audio Recorder Thread" }; // the thread that will write our audio data to disk
ScopedPointer<AudioFormatWriter::ThreadedWriter> threadedWriter; // the FIFO used to buffer the incoming data
double sampleRate = 0.0;
int64 nextSampleNum = 0;
CriticalSection writerLock;
AudioFormatWriter::ThreadedWriter* volatile activeWriter = nullptr;
};
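Wired up, usage might look roughly like this (a sketch: deviceManager and thumbnail are assumed to exist in the surrounding component, as they do in the JUCE demo):
// register the recorder so it receives the device's audio callbacks
AudioRecorder recorder (thumbnail);
deviceManager.addAudioCallback (&recorder);
// record one file; call recorder.stop() about a second later, e.g. from a Timer
recorder.startRecording (File::getSpecialLocation (File::userDocumentsDirectory)
                             .getChildFile ("note_0.wav"));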
Note that the actual audio callbacks containing the audio data from your plugin happen inside the AudioProcessorGraph inside FilterGraph: an audio callback fires many times a second, and the raw audio data is passed in there. It would probably be very messy to change that inside AudioPluginHost unless you know what you are doing; it would be simpler to use something like the above example or create your own app that has its own audio flow.
The function you asked about:
AudioBuffer<FloatType> getBusBuffer (AudioBuffer<FloatType>& processBlockBuffer) const
is irrelevant. Once you're already in the audio callback, this would give you the audio being sent to a bus of your plugin (aka if your synth had a side chain). What you want to do instead is take the audio coming out of the callback and pass it to an AudioFormatWriter, or preferably an AudioFormatWriter::ThreadedWriter so that the actual writing happens on a different thread.
If you're not at all familiar with C++ or JUCE, Max/MSP or Pure Data might make it easier for you to quickly whip something up.

QtGstreamer camerabin2 usage

I'm working on an Olimex A13 board with just eglfs, i.e., no windowing system. Because of this, Qt Multimedia's video and camera features aren't working, as Qt uses GStreamer, which in turn needs X. So I'm using the QtGstreamer library, which is here.
I've followed the examples and created a media player which is working as expected. Now I want to implement a camera using camerabin2, which is from the bad plugins.
This is my code:
//init QGst
QGst::init(&argc, &argv);
//create video surface
QGst::Quick::VideoSurface* videoSurface = new QGst::Quick::VideoSurface(&engine);
CameraPlayer player;
player.setVideoSink(videoSurface->videoSink());
//cameraplayer.cpp
void open()
{
if (!m_pipeline) {
m_pipeline = QGst::ElementFactory::make("camerabin").dynamicCast<QGst::Pipeline>();
if (m_pipeline) {
m_pipeline->setProperty("video-sink", m_videoSink);
//watch the bus for messages
QGst::BusPtr bus = m_pipeline->bus();
bus->addSignalWatch();
QGlib::connect(bus, "message", this, &CameraPlayer::onBusMessage);
//QGlib::connect(bus, "image-done", this, &CameraPlayer::onImageDone);
} else {
qCritical() << "Failed to create the pipeline";
}
}
}
//-----------------------------------
void CameraPlayer::setVideoSink(const QGst::ElementPtr & sink)
{
m_videoSink = sink;
}
//-----------------------------------
void CameraPlayer::start()
{
m_pipeline->setState(QGst::StateReady);
m_pipeline->setState(QGst::StatePlaying);
}
I then call cameraPlayer.start(), which isn't working, i.e., no video. Am I missing something here? Has anyone used QtGstreamer to stream a webcam? Thanks in advance.
I realised some plugins (multifilesink) were missing. I started my Qt application with the --gst-debug-level=4 argument, and GStreamer then reported the missing plugins.
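For a quick check before launching the app, gst-inspect can tell you whether an element is available (gst-inspect-1.0 on GStreamer 1.x; gst-inspect-0.10 on older 0.10 stacks):
gst-inspect-1.0 multifilesink
./myapp --gst-debug-level=4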

Periodically send data to MATLAB from mexFile

I'm working right now on a data acquisition tool written completely in MATLAB. It was the wish of my colleagues that I write this thing in MATLAB so that they can expand and modify it.
The software needs to grab a picture from two connected USB cameras. The API for these cameras is written in C++ and is documented -> Here.
Here is the problem: when I write a MEX file which grabs a picture, it includes the initialization and configuration-loading of the cameras, which takes a long time. When I grab the pictures this way, it takes MATLAB over 1 second to perform the task. The cameras are able, once initialized, to record and send 100 fps; the minimum frame rate I need is 10 fps.
I need to be able to send every recorded picture back to MATLAB, because the recording session for which the acquisition tool is needed takes approx. 12 hours and we need a live screen with some slight post-processing.
Is it possible to write a loop within the MEX file which sends data to MATLAB, then waits for a return signal from MATLAB and continues? This way I could initialize the cameras once and send the images to MATLAB periodically.
I'm a beginner in C++, and it is quite possible that I don't understand a fundamental concept of why this is not possible.
Thank you for any advice or sources where I could look.
Please find below the code which initializes the cameras using the Pylon API provided by Basler.
Please find below the Code which initializes the Cameras
using the Pylon API provided by Basler.
// Based on the Grab_MultipleCameras.cpp Routine from Basler
/*
This routine grabs one frame from 2 cameras connected
via two USB3 ports. It directs the Output to MATLAB.
*/
// Include files to use the PYLON API.
#include <pylon/PylonIncludes.h>
#include <pylon/usb/PylonUsbIncludes.h>
#include <pylon/usb/BaslerUsbInstantCamera.h>
#include <pylon/PylonUtilityIncludes.h>
// Include Files for MEX Generation
#include <matrix.h>
#include <mex.h>
// Namespace for using pylon objects.
using namespace Pylon;
// We are lazy and use Basler USB namespace
using namespace Basler_UsbCameraParams;
// Standard namespace
using namespace std;
// Define Variables Globally to be remembered between each call
// Filenames for CamConfig
const String_t filenames[] = { "NodeMapCam1.pfs","NodeMapCam2.pfs" };
// Limits the amount of cameras used for grabbing.
static const size_t camerasToUse = 2;
// Create an array of instant cameras for the found devices and
// avoid exceeding a maximum number of devices.
CBaslerUsbInstantCameraArray cameras(camerasToUse);
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
// Automagically call PylonInitialize and PylonTerminate to ensure the pylon runtime system
// is initialized during the lifetime of this object.
PylonAutoInitTerm autoInitTerm;
try
{
// Get the transport layer factory
CTlFactory& tlFactory = CTlFactory::GetInstance();
// Get all attached devices and exit application if no device or USB Port is found.
DeviceInfoList_t devices;
ITransportLayer *pTL = dynamic_cast<ITransportLayer*>(tlFactory.CreateTl(BaslerUsbDeviceClass));
if (pTL == NULL)
{
throw RUNTIME_EXCEPTION("No USB transport layer available.");
}
if (pTL->EnumerateDevices(devices) == 0)
{
throw RUNTIME_EXCEPTION("No camera present.");
}
// Create and attach all Pylon Devices. Load Configuration
for (size_t i = 0; i < cameras.GetSize(); ++i)
{
cameras[i].Attach(tlFactory.CreateDevice(devices[i]));
}
// Open all cameras.
cameras.Open();
// Load Configuration and execute Trigger
for (size_t i = 0; i < cameras.GetSize(); ++i)
{
CFeaturePersistence::Load(filenames[i], &cameras[i].GetNodeMap());
}
if (cameras[0].IsOpen() && cameras[1].IsOpen())
{
mexPrintf("\nCameras are fired up and configuration is applied\n");
// HERE I WOULD LIKE TO GRAB PICTURES AND SEND THEM
// PERIODICALLY TO MATLAB.
}
}
catch (GenICam::GenericException &e)
{
// Error handling
mexPrintf("\nAn exception occured:\n");
mexPrintf(e.GetDescription());
}
return;
}
You could loop and send images back to MATLAB periodically, but how do you want it to be in the workspace (multiple 2D images, a huge 3D/4D array, cell, etc.)? I think the solution you are looking for is a stateful MEX file, which can be launched with an 'init' or 'new' command, and then called again repeatedly with 'capture' commands for an already initialized camera.
There is an example of how to do this in my GitHub. Start with class_wrapper_template.cpp and modify it for your commands (new, capture, delete, etc.). Here is a rough and untested example of how the core of it might look (also mirrored on Gist.GitHub):
// pylon_mex_camera_interface.cpp
#include "mex.h"
#include <vector>
#include <map>
#include <algorithm>
#include <memory>
#include <string>
#include <sstream>
//////////////////////// BEGIN Step 1: Configuration ////////////////////////
// Include your class declarations (and PYLON API).
#include <pylon/PylonIncludes.h>
#include <pylon/usb/PylonUsbIncludes.h>
#include <pylon/usb/BaslerUsbInstantCamera.h>
#include <pylon/PylonUtilityIncludes.h>
// Define class_type for your class
typedef CBaslerUsbInstantCameraArray class_type;
// List actions
enum class Action
{
// create/destroy instance - REQUIRED
New,
Delete,
// user-specified class functionality
Capture
};
// Map string (first input argument to mexFunction) to an Action
const std::map<std::string, Action> actionTypeMap =
{
{ "new", Action::New },
{ "delete", Action::Delete },
{ "capture", Action::Capture }
}; // if no initializer list available, put declaration and inserts into mexFunction
using namespace Pylon;
using namespace Basler_UsbCameraParams;
const String_t filenames[] = { "NodeMapCam1.pfs","NodeMapCam2.pfs" };
static const size_t camerasToUse = 2;
///////////////////////// END Step 1: Configuration /////////////////////////
// boilerplate until Step 2 below
typedef unsigned int handle_type;
typedef std::pair<handle_type, std::shared_ptr<class_type>> indPtrPair_type; // or boost::shared_ptr
typedef std::map<indPtrPair_type::first_type, indPtrPair_type::second_type> instanceMap_type;
typedef indPtrPair_type::second_type instPtr_t;
// getHandle pulls the integer handle out of prhs[1]
handle_type getHandle(int nrhs, const mxArray *prhs[]);
// checkHandle gets the position in the instance table
instanceMap_type::const_iterator checkHandle(const instanceMap_type&, handle_type);
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[]) {
// static storage duration object for table mapping handles to instances
static instanceMap_type instanceTab;
if (nrhs < 1 || !mxIsChar(prhs[0]))
mexErrMsgTxt("First input must be an action string ('new', 'delete', or a method name).");
char *actionCstr = mxArrayToString(prhs[0]); // convert char16_t to char
std::string actionStr(actionCstr); mxFree(actionCstr);
for (auto & c : actionStr) c = ::tolower(c); // remove this for case sensitivity
if (actionTypeMap.count(actionStr) == 0)
mexErrMsgTxt(("Unrecognized action (not in actionTypeMap): " + actionStr).c_str());
// If action is not 'new' or 'delete' try to locate an existing instance based on input handle
instPtr_t instance;
if (actionTypeMap.at(actionStr) != Action::New && actionTypeMap.at(actionStr) != Action::Delete) {
handle_type h = getHandle(nrhs, prhs);
instanceMap_type::const_iterator instIt = checkHandle(instanceTab, h);
instance = instIt->second;
}
//////// Step 2: customize each action in the switch in mexFuction ////////
switch (actionTypeMap.at(actionStr))
{
case Action::New:
{
if (nrhs > 1 && mxGetNumberOfElements(prhs[1]) != 1)
mexErrMsgTxt("Second argument (optional) must be a scalar, N.");
handle_type newHandle = instanceTab.size() ? (instanceTab.rbegin())->first + 1 : 1;
// Store a new CBaslerUsbInstantCameraArray in the instance map
std::pair<instanceMap_type::iterator, bool> insResult =
instanceTab.insert(indPtrPair_type(newHandle, std::make_shared<class_type>(camerasToUse)));
if (!insResult.second) // sanity check
mexPrintf("Oh, bad news. Tried to add an existing handle."); // shouldn't ever happen
else
mexLock(); // add to the lock count
// return the handle
plhs[0] = mxCreateDoubleScalar(insResult.first->first); // == newHandle
// Get all attached devices and exit application if no device or USB Port is found.
CTlFactory& tlFactory = CTlFactory::GetInstance();
// Check if cameras are attached
ITransportLayer *pTL = dynamic_cast<ITransportLayer*>(tlFactory.CreateTl(BaslerUsbDeviceClass));
DeviceInfoList_t devices;
// todo: some checking here... (pTL == NULL || pTL->EnumerateDevices(devices) == 0)
// Create and attach all Pylon Devices. Load Configuration
CBaslerUsbInstantCameraArray &cameras = *(insResult.first->second); // note: 'instance' is only set for non-new/delete actions
for (size_t i = 0; i < cameras.GetSize(); ++i) {
cameras[i].Attach(tlFactory.CreateDevice(devices[i]));
}
// Open all cameras.
cameras.Open();
// Load Configuration and execute Trigger
for (size_t i = 0; i < cameras.GetSize(); ++i) {
CFeaturePersistence::Load(filenames[i], &cameras[i].GetNodeMap());
}
if (cameras[0].IsOpen() && cameras[1].IsOpen()) {
mexPrintf("\nCameras are fired up and configuration is applied\n");
}
break;
}
case Action::Delete:
{
instanceMap_type::const_iterator instIt = checkHandle(instanceTab, getHandle(nrhs, prhs));
instIt->second->Close(); // may be unnecessary if d'tor does it
instanceTab.erase(instIt);
mexUnlock();
plhs[0] = mxCreateLogicalScalar(instanceTab.empty()); // just info
break;
}
case Action::Capture:
{
CBaslerUsbInstantCameraArray &cameras = *instance; // alias for the instance
// TODO: create output array and capture a frame(s) into it
plhs[0] = mxCreateNumericArray(...);
pixel_type* data = (pixel_type*) mxGetData(plhs[0]);
cameras[0].GrabOne(...,data,...);
// also for cameras[1]?
break;
}
default:
mexErrMsgTxt(("Unhandled action: " + actionStr).c_str());
break;
}
//////////////////////////////// DONE! ////////////////////////////////
}
// See github for getHandle and checkHandle
The idea is that you would call it once to init:
>> h = pylon_mex_camera_interface('new');
Then you would call it in a MATLAB loop to get frames:
>> newFrame{i} = pylon_mex_camera_interface('capture', h);
When you are done:
>> pylon_mex_camera_interface('delete', h)
You should wrap this with a MATLAB class. Derive from cppclass.m to do this easily. For a derived class example see pqheap.m.
Instead of sending data back continuously, you should make your mex file store the camera-related settings so that it does not reinitialize on each call. One way to do this is to use two modes of calls for your mex file: an 'init' call and a call to get data. Pseudo code in MATLAB would be
cameraDataPtr = myMex('init');
while ~done
data = myMex('data', cameraDataPtr);
end
In your mex file, you should store the camera settings in memory which is persistent across calls. One way to do this is using 'new' in C++. You should return this memory pointer as an int64 type to MATLAB, which is shown as cameraDataPtr in the above code. When 'data' is asked for, you should take cameraDataPtr as input and cast it back to your camera settings. Say you have a CameraSettings object in C++ which stores all camera-related data; then rough pseudo code in C++ would be
if prhs[0] == 'init' { // Use mxArray api to check this
cameraDataPtr = new CameraSettings; // Initialize and setup camera
plhs[0] = createMxArray(cameraDataPtr); // Use mxArray API to create int64 from pointer
return;
} else {
// Need data
cameraDataPtr = getCameraDataPtr(prhs[1]);
// Use cameraDataPtr after checking validity to get next frame
}
This works because mex files stay in memory once loaded until you clear them. You should use the mexAtExit function to release the camera resource when the mex file is unloaded from memory. You could also use 'static' to store your camera settings in C++ if this is the only place your mex file is going to be used. This avoids writing some of the mxArray handling code for returning your C++ pointer.
If you wrap the call to this mex file inside a MATLAB object you can control the initialization and run-time process more easily and present a better API to your users.
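A minimal sketch of the 'static' variant with mexAtExit (CameraSettings here is a stand-in for whatever type holds your initialized camera state):
#include "mex.h"
// Stand-in type for the initialized camera state.
struct CameraSettings { /* handles, buffers, ... */ };
static CameraSettings *camera = nullptr; // persists across mex calls
static void cleanup(void)
{
    // runs when the mex file is unloaded (e.g. 'clear mex')
    delete camera;
    camera = nullptr;
}
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    if (camera == nullptr) {
        camera = new CameraSettings; // one-time camera initialization
        mexAtExit(cleanup);          // make sure the camera is released
    }
    // ... use the already-initialized camera to grab and return a frame ...
}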
I ran into the same problem and wanted to use a Basler camera with the MEX API in Matlab. The contributions and hints here definitely helped me come up with some ideas. However, there is a much simpler solution than the previously proposed one. It's not necessary to return the camera pointer back to Matlab, because objects stay in memory across multiple mex calls. Here is working code which I programmed with the new MEX C++ API. Have fun with it.
Here is the C++ File which can be compiled with mex:
#include <opencv2/core/core.hpp>
#include <opencv2/opencv.hpp>
#include <pylon/PylonIncludes.h>
#include <pylon/usb/PylonUsbIncludes.h>
#include <pylon/usb/BaslerUsbInstantCamera.h>
#include <pylon/PylonUtilityIncludes.h>
#include "mex.hpp"
#include "mexAdapter.hpp"
#include <chrono>
#include <string>
using namespace matlab::data;
using namespace std;
using namespace Pylon;
using namespace Basler_UsbCameraParams;
using namespace GenApi;
using namespace cv;
using matlab::mex::ArgumentList;
class MexFunction : public matlab::mex::Function{
matlab::data::ArrayFactory factory;
double Number = 0;
std::shared_ptr<matlab::engine::MATLABEngine> matlabPtr = getEngine();
std::ostringstream stream;
Pylon::CInstantCamera* camera;
INodeMap* nodemap;
double systemTime;
double cameraTime;
public:
MexFunction(){}
void operator()(ArgumentList outputs, ArgumentList inputs) {
try {
Number = Number + 1;
if(!inputs.empty()){
matlab::data::CharArray InputKey = inputs[0];
stream << "You called: " << InputKey.toAscii() << std::endl;
displayOnMATLAB(stream);
// If "Init" is the input value
if(InputKey.toUTF16() == factory.createCharArray("Init").toUTF16()){
// Important: Has to be closed
PylonInitialize();
IPylonDevice* pDevice = CTlFactory::GetInstance().CreateFirstDevice();
camera = new CInstantCamera(pDevice);
nodemap = &camera->GetNodeMap();
camera->Open();
camera->RegisterConfiguration( new CSoftwareTriggerConfiguration, RegistrationMode_ReplaceAll, Cleanup_Delete);
CharArray DeviceInfo = factory.createCharArray(camera -> GetDeviceInfo().GetModelName().c_str());
stream << "Message: Used Camera is " << DeviceInfo.toAscii() << std::endl;
displayOnMATLAB(stream);
}
// If "Grab" is called
if(InputKey.toUTF16() == factory.createCharArray("Grab").toUTF16()){
static const uint32_t c_countOfImagesToGrab = 1;
camera -> StartGrabbing(c_countOfImagesToGrab);
CGrabResultPtr ptrGrabResult;
Mat openCvImage;
CImageFormatConverter formatConverter;
CPylonImage pylonImage;
while (camera -> IsGrabbing()) {
camera -> RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);
if (ptrGrabResult->GrabSucceeded()) {
formatConverter.Convert(pylonImage, ptrGrabResult);
openCvImage = cv::Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(), CV_8UC1, (uint8_t *)pylonImage.GetBuffer(), Mat::AUTO_STEP); // reuse the outer declaration instead of shadowing it
const size_t rows = openCvImage.rows;
const size_t cols = openCvImage.cols;
matlab::data::TypedArray<uint8_t> Yp = factory.createArray<uint8_t>({ rows, cols });
for(int i = 0 ;i < openCvImage.rows; ++i){
for(int j = 0; j < openCvImage.cols; ++j){
Yp[i][j] = openCvImage.at<uint8_t>(i,j);
}
}
outputs[0] = Yp;
}
}
}
// if "Delete"
if(InputKey.toUTF16() == factory.createCharArray("Delete").toUTF16()){
camera->Close();
PylonTerminate();
stream << "Camera instance removed" << std::endl;
displayOnMATLAB(stream);
Number = 0;
//mexUnlock();
}
}
// ----------------------------------------------------------------
stream << "Anzahl der Aufrufe bisher: " << Number << std::endl;
displayOnMATLAB(stream);
// ----------------------------------------------------------------
}
catch (const GenericException & ex) {
matlabPtr->feval(u"disp", 0, std::vector<Array>({factory.createCharArray(ex.GetDescription()) }));
}
}
void displayOnMATLAB(std::ostringstream& stream) {
// Pass stream content to MATLAB fprintf function
matlabPtr->feval(u"fprintf", 0,
std::vector<Array>({ factory.createScalar(stream.str()) }));
// Clear stream buffer
stream.str("");
}
};
This mex File can be called from Matlab with the following commands:
% Initializes the camera. The camera parameters can also be loaded here.
NameOfMexFile('Init');
% Camera image is captured and sent back to Matlab
[Image] = NameOfMexFile('Grab');
% The camera connection has to be closed.
NameOfMexFile('Delete');
Optimization and improvements of this code are welcome. There are still problems with the efficiency of the code: an image acquisition takes about 0.6 seconds. This is mainly due to the cast from a cv::Mat image to a TypedArray, which is necessary to return it back to Matlab. See this line in the two loops: Yp[i][j] = openCvImage.at<uint8_t>(i,j);
I have not figured out how to make this more efficient yet. Furthermore, the code cannot be used to return multiple images back to Matlab.
Maybe someone has an idea or hint to make the conversion from cv::Mat to a Matlab array type faster. I already mentioned the problem in another post. See here: How to Return a Opencv image cv::mat to Matlab with the Mex C++ API
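One approach worth trying (a sketch, assuming a single-channel CV_8UC1 image and a MATLAB version where ArrayFactory::createBuffer/createArrayFromBuffer are available): transpose the image once so its memory matches MATLAB's column-major layout, then hand the whole buffer over instead of copying element by element. This would replace the creation of Yp and the two copy loops in the "Grab" branch, reusing the rows/cols variables from the code above (add #include <cstring> for std::memcpy):
cv::Mat transposed;
cv::transpose(openCvImage, transposed); // contiguous copy whose row-major order equals the original's column-major order
auto buf = factory.createBuffer<uint8_t>(rows * cols);
std::memcpy(buf.get(), transposed.data, rows * cols);
// createArrayFromBuffer takes ownership of the buffer, so no second copy is made
outputs[0] = factory.createArrayFromBuffer<uint8_t>({rows, cols}, std::move(buf));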