I'm currently working on a data acquisition tool written entirely in MATLAB. My colleagues asked for it to be written in MATLAB so that they can expand and modify it.
The software needs to grab a picture from each of two connected USB cameras.
The API for these cameras is written in C++ and is documented -> Here.
Here is the problem:
When I write a MEX file which grabs a picture, it includes the initialization and configuration loading of the cameras, which takes a long time. Grabbing pictures this way takes MATLAB over 1 second per call.
Once initialized, the cameras are able to record and send 100 fps. The minimum frame rate I need is 10 fps.
I need to be able to send every recorded picture back to MATLAB, because the recording session for which the acquisition tool is needed takes approximately 12 hours and we need a live screen with some slight post-processing.
Is it possible to write a loop within the MEX file which sends data to MATLAB, then waits for a return signal from MATLAB, and continues?
This way I could initialize the cameras once and send the images to MATLAB periodically.
I'm a beginner in C++, and it is quite possible that I'm missing a fundamental reason why this is not possible.
Thank you for any advice or sources where I could look.
Please find below the code which initializes the cameras using the Pylon API provided by Basler.
// Based on the Grab_MultipleCameras.cpp Routine from Basler
/*
This routine grabs one frame from 2 cameras connected
via two USB3 ports. It directs the Output to MATLAB.
*/
// Include files to use the PYLON API.
#include <pylon/PylonIncludes.h>
#include <pylon/usb/PylonUsbIncludes.h>
#include <pylon/usb/BaslerUsbInstantCamera.h>
#include <pylon/PylonUtilityIncludes.h>
// Include Files for MEX Generation
#include <matrix.h>
#include <mex.h>
// Namespace for using pylon objects.
using namespace Pylon;
// We are lazy and use Basler USB namespace
using namespace Basler_UsbCameraParams;
// Standard namespace
using namespace std;
// Define Variables Globally to be remembered between each call
// Filenames for CamConfig
const String_t filenames[] = { "NodeMapCam1.pfs","NodeMapCam2.pfs" };
// Limits the amount of cameras used for grabbing.
static const size_t camerasToUse = 2;
// Create an array of instant cameras for the found devices and
// avoid exceeding a maximum number of devices.
CBaslerUsbInstantCameraArray cameras(camerasToUse);
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
// Automagically calls PylonInitialize and PylonTerminate to ensure the pylon runtime system
// is initialized during the lifetime of this object.
PylonAutoInitTerm autoInitTerm;
try
{
// Get the transport layer factory
CTlFactory& tlFactory = CTlFactory::GetInstance();
// Get all attached devices and exit application if no device or USB Port is found.
DeviceInfoList_t devices;
ITransportLayer *pTL = dynamic_cast<ITransportLayer*>(tlFactory.CreateTl(BaslerUsbDeviceClass));
if (pTL == NULL)
{
throw RUNTIME_EXCEPTION("No USB transport layer available.");
}
if (pTL->EnumerateDevices(devices) == 0)
{
throw RUNTIME_EXCEPTION("No camera present.");
}
// Create and attach all Pylon Devices. Load Configuration
for (size_t i = 0; i < cameras.GetSize(); ++i)
{
cameras[i].Attach(tlFactory.CreateDevice(devices[i]));
}
// Open all cameras.
cameras.Open();
// Load Configuration and execute Trigger
for (size_t i = 0; i < cameras.GetSize(); ++i)
{
CFeaturePersistence::Load(filenames[i], &cameras[i].GetNodeMap());
}
if (cameras[0].IsOpen() && cameras[1].IsOpen())
{
mexPrintf("\nCameras are fired up and configuration is applied\n");
// HERE I WOULD LIKE TO GRAB PICTURES AND SEND THEM
// PERIODICALLY TO MATLAB.
}
}
catch (GenICam::GenericException &e)
{
// Error handling
mexPrintf("\nAn exception occured:\n");
mexPrintf(e.GetDescription());
}
return;
}
You could loop and send images back to MATLAB periodically, but how do you want it to be in the workspace (multiple 2D images, a huge 3D/4D array, cell, etc.)? I think the solution you are looking for is a stateful MEX file, which can be launched with an 'init' or 'new' command, and then called again repeatedly with 'capture' commands for an already initialized camera.
There is an example of how to do this in my GitHub. Start with class_wrapper_template.cpp and modify it for your commands (new, capture, delete, etc.). Here is a rough and untested example of how the core of it might look (also mirrored on Gist.GitHub):
// pylon_mex_camera_interface.cpp
#include "mex.h"
#include <vector>
#include <map>
#include <algorithm>
#include <memory>
#include <string>
#include <sstream>
//////////////////////// BEGIN Step 1: Configuration ////////////////////////
// Include your class declarations (and PYLON API).
#include <pylon/PylonIncludes.h>
#include <pylon/usb/PylonUsbIncludes.h>
#include <pylon/usb/BaslerUsbInstantCamera.h>
#include <pylon/PylonUtilityIncludes.h>
// Define class_type for your class
typedef CBaslerUsbInstantCameraArray class_type;
// List actions
enum class Action
{
// create/destroy instance - REQUIRED
New,
Delete,
// user-specified class functionality
Capture
};
// Map string (first input argument to mexFunction) to an Action
const std::map<std::string, Action> actionTypeMap =
{
{ "new", Action::New },
{ "delete", Action::Delete },
{ "capture", Action::Capture }
}; // if no initializer list available, put declaration and inserts into mexFunction
using namespace Pylon;
using namespace Basler_UsbCameraParams;
const String_t filenames[] = { "NodeMapCam1.pfs","NodeMapCam2.pfs" };
static const size_t camerasToUse = 2;
///////////////////////// END Step 1: Configuration /////////////////////////
// boilerplate until Step 2 below
typedef unsigned int handle_type;
typedef std::pair<handle_type, std::shared_ptr<class_type>> indPtrPair_type; // or boost::shared_ptr
typedef std::map<indPtrPair_type::first_type, indPtrPair_type::second_type> instanceMap_type;
typedef indPtrPair_type::second_type instPtr_t;
// getHandle pulls the integer handle out of prhs[1]
handle_type getHandle(int nrhs, const mxArray *prhs[]);
// checkHandle gets the position in the instance table
instanceMap_type::const_iterator checkHandle(const instanceMap_type&, handle_type);
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[]) {
// static storage duration object for table mapping handles to instances
static instanceMap_type instanceTab;
if (nrhs < 1 || !mxIsChar(prhs[0]))
mexErrMsgTxt("First input must be an action string ('new', 'delete', or a method name).");
char *actionCstr = mxArrayToString(prhs[0]); // convert char16_t to char
std::string actionStr(actionCstr); mxFree(actionCstr);
for (auto & c : actionStr) c = ::tolower(c); // remove this for case sensitivity
if (actionTypeMap.count(actionStr) == 0)
mexErrMsgTxt(("Unrecognized action (not in actionTypeMap): " + actionStr).c_str());
// If action is not 'new' or 'delete' try to locate an existing instance based on input handle
instPtr_t instance;
if (actionTypeMap.at(actionStr) != Action::New && actionTypeMap.at(actionStr) != Action::Delete) {
handle_type h = getHandle(nrhs, prhs);
instanceMap_type::const_iterator instIt = checkHandle(instanceTab, h);
instance = instIt->second;
}
//////// Step 2: customize each action in the switch in mexFunction ////////
switch (actionTypeMap.at(actionStr))
{
case Action::New:
{
if (nrhs > 1 && mxGetNumberOfElements(prhs[1]) != 1)
mexErrMsgTxt("Second argument (optional) must be a scalar, N.");
handle_type newHandle = instanceTab.size() ? (instanceTab.rbegin())->first + 1 : 1;
// Store a new CBaslerUsbInstantCameraArray in the instance map
std::pair<instanceMap_type::iterator, bool> insResult =
instanceTab.insert(indPtrPair_type(newHandle, std::make_shared<class_type>(camerasToUse)));
if (!insResult.second) // sanity check
mexPrintf("Oh, bad news. Tried to add an existing handle."); // shouldn't ever happen
else
mexLock(); // add to the lock count
// return the handle
plhs[0] = mxCreateDoubleScalar(insResult.first->first); // == newHandle
// Get all attached devices and exit if no device or USB port is found.
CTlFactory& tlFactory = CTlFactory::GetInstance();
// Check if cameras are attached
ITransportLayer *pTL = dynamic_cast<ITransportLayer*>(tlFactory.CreateTl(BaslerUsbDeviceClass));
DeviceInfoList_t devices;
if (pTL == NULL || pTL->EnumerateDevices(devices) == 0)
mexErrMsgTxt("No USB transport layer available or no camera present.");
// Create and attach all Pylon Devices. Load Configuration
instance = insResult.first->second; // alias the instance we just created
CBaslerUsbInstantCameraArray &cameras = *instance;
for (size_t i = 0; i < cameras.GetSize(); ++i) {
cameras[i].Attach(tlFactory.CreateDevice(devices[i]));
}
// Open all cameras.
cameras.Open();
// Load Configuration and execute Trigger
for (size_t i = 0; i < cameras.GetSize(); ++i) {
CFeaturePersistence::Load(filenames[i], &cameras[i].GetNodeMap());
}
if (cameras[0].IsOpen() && cameras[1].IsOpen()) {
mexPrintf("\nCameras are fired up and configuration is applied\n");
}
break;
}
case Action::Delete:
{
instanceMap_type::const_iterator instIt = checkHandle(instanceTab, getHandle(nrhs, prhs));
instIt->second->Close(); // may be unnecessary if d'tor does it
instanceTab.erase(instIt);
mexUnlock();
plhs[0] = mxCreateLogicalScalar(instanceTab.empty()); // just info
break;
}
case Action::Capture:
{
CBaslerUsbInstantCameraArray &cameras = *instance; // alias for the instance
// TODO: create output array and capture a frame(s) into it
plhs[0] = mxCreateNumericArray(...);
pixel_type* data = (pixel_type*) mxGetData(plhs[0]);
cameras[0].GrabOne(...,data,...);
// also for cameras[1]?
break;
}
default:
mexErrMsgTxt(("Unhandled action: " + actionStr).c_str());
break;
}
//////////////////////////////// DONE! ////////////////////////////////
}
// See github for getHandle and checkHandle
The idea is that you would call it once to init:
>> h = pylon_mex_camera_interface('new');
Then you would call it in a MATLAB loop to get frames:
>> newFrame{i} = pylon_mex_camera_interface('capture', h);
When you are done:
>> pylon_mex_camera_interface('delete', h)
You should wrap this with a MATLAB class. Derive from cppclass.m to do this easily. For a derived class example see pqheap.m.
Instead of looping inside the MEX file and sending data to MATLAB, you should make your MEX file store the camera-related settings so that it does not initialize on each call. One way to do this is to use two modes of calls for your MEX file: an 'init' call and a call to get data. Pseudocode in MATLAB would be
cameraDataPtr = myMex('init');
while ~done
data = myMex('data', cameraDataPtr);
end
In your MEX file, you should store the camera settings in memory which persists across calls. One way to do this is using 'new' in C++. You should return this memory pointer to MATLAB as an int64, which is shown as cameraDataPtr in the above code. When 'data' is asked for, you take cameraDataPtr as input and cast it back to your camera settings. Say you have a CameraSettings object in C++ which stores all camera-related data; then rough pseudocode in C++ would be
if prhs[0] == 'init' { // Use mxArray api to check this
cameraDataPtr = new CameraSettings; // Initialize and setup camera
plhs[0] = createMxArray(cameraDataPtr); // Use mxArray API to create int64 from pointer
return;
} else {
// Need data
cameraDataPtr = getCameraDataPtr(prhs[1]); // note: read from the inputs (prhs), not plhs
// Use cameraDataPtr after checking validity to get next frame
}
This works because MEX files stay in memory once loaded, until you clear them. You should use the mexAtExit function to release the camera resources when the MEX file is unloaded from memory. You could also use 'static' storage for your camera settings in C++ if this is the only place your MEX file is going to be used; this avoids writing some mxArray handling code for returning your C++ pointer.
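To make that concrete, here is a minimal untested sketch of the two-mode pattern; CameraSettings is a hypothetical placeholder for whatever object holds your camera state:
// Untested sketch of the 'init'/'data' pattern described above.
#include "mex.h"
#include <cstdint>
#include <cstring>
struct CameraSettings { /* camera handles, buffers, ... */ };
static CameraSettings *gCamera = NULL;
static void cleanup() { delete gCamera; gCamera = NULL; } // runs on 'clear mex'
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    char cmd[16];
    mxGetString(prhs[0], cmd, sizeof(cmd));
    if (strcmp(cmd, "init") == 0) {
        gCamera = new CameraSettings; // initialize and configure the camera here
        mexAtExit(cleanup);           // release the camera when the MEX file is unloaded
        plhs[0] = mxCreateNumericMatrix(1, 1, mxINT64_CLASS, mxREAL);
        *(int64_t *)mxGetData(plhs[0]) = (int64_t)(intptr_t)gCamera; // pointer as int64
    } else { // "data"
        CameraSettings *cam = (CameraSettings *)(intptr_t)(*(int64_t *)mxGetData(prhs[1]));
        // ... use cam to grab the next frame and fill plhs[0] ...
        (void)cam; // unused in this sketch
    }
}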
If you wrap the call to this mex file inside a MATLAB object you can control the initialization and run-time process more easily and present a better API to your users.
I ran into the same problem and wanted to use a Basler camera with the MEX API in Matlab. The contributions and hints here definitely helped me come up with some ideas. However, there is a much simpler solution than the previously proposed one. It's not necessary to return the camera pointer back to Matlab, because objects stay in memory across multiple MEX calls. Here is working code which I programmed with the new MEX C++ API. Have fun with it.
Here is the C++ file, which can be compiled with mex:
#include <opencv2/core/core.hpp>
#include <opencv2/opencv.hpp>
#include <pylon/PylonIncludes.h>
#include <pylon/usb/PylonUsbIncludes.h>
#include <pylon/usb/BaslerUsbInstantCamera.h>
#include <pylon/PylonUtilityIncludes.h>
#include "mex.hpp"
#include "mexAdapter.hpp"
#include <chrono>
#include <string>
using namespace matlab::data;
using namespace std;
using namespace Pylon;
using namespace Basler_UsbCameraParams;
using namespace GenApi;
using namespace cv;
using matlab::mex::ArgumentList;
class MexFunction : public matlab::mex::Function{
matlab::data::ArrayFactory factory;
double Number = 0;
std::shared_ptr<matlab::engine::MATLABEngine> matlabPtr = getEngine();
std::ostringstream stream;
Pylon::CInstantCamera* camera;
INodeMap* nodemap;
double systemTime;
double cameraTime;
public:
MexFunction(){}
void operator()(ArgumentList outputs, ArgumentList inputs) {
try {
Number = Number + 1;
if(!inputs.empty()){
matlab::data::CharArray InputKey = inputs[0];
stream << "You called: " << InputKey.toAscii() << std::endl;
displayOnMATLAB(stream);
// If "Init" is the input value
if(InputKey.toUTF16() == factory.createCharArray("Init").toUTF16()){
// Important: Has to be closed
PylonInitialize();
IPylonDevice* pDevice = CTlFactory::GetInstance().CreateFirstDevice();
camera = new CInstantCamera(pDevice);
nodemap = &camera->GetNodeMap();
camera->Open();
camera->RegisterConfiguration( new CSoftwareTriggerConfiguration, RegistrationMode_ReplaceAll, Cleanup_Delete);
CharArray DeviceInfo = factory.createCharArray(camera -> GetDeviceInfo().GetModelName().c_str());
stream << "Message: Used Camera is " << DeviceInfo.toAscii() << std::endl;
displayOnMATLAB(stream);
}
// If "Grab" is called
if(InputKey.toUTF16() == factory.createCharArray("Grab").toUTF16()){
static const uint32_t c_countOfImagesToGrab = 1;
camera -> StartGrabbing(c_countOfImagesToGrab);
CGrabResultPtr ptrGrabResult;
Mat openCvImage;
CImageFormatConverter formatConverter;
CPylonImage pylonImage;
while (camera -> IsGrabbing()) {
camera -> RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);
if (ptrGrabResult->GrabSucceeded()) {
formatConverter.Convert(pylonImage, ptrGrabResult);
openCvImage = cv::Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(), CV_8UC1, (uint8_t *)pylonImage.GetBuffer(), Mat::AUTO_STEP);
const size_t rows = openCvImage.rows;
const size_t cols = openCvImage.cols;
matlab::data::TypedArray<uint8_t> Yp = factory.createArray<uint8_t>({ rows, cols });
for(int i = 0 ;i < openCvImage.rows; ++i){
for(int j = 0; j < openCvImage.cols; ++j){
Yp[i][j] = openCvImage.at<uint8_t>(i,j);
}
}
outputs[0] = Yp;
}
}
}
// if "Delete"
if(InputKey.toUTF16() == factory.createCharArray("Delete").toUTF16()){
camera->Close();
PylonTerminate();
stream << "Camera instance removed" << std::endl;
displayOnMATLAB(stream);
Number = 0;
//mexUnlock();
}
}
// ----------------------------------------------------------------
stream << "Anzahl der Aufrufe bisher: " << Number << std::endl;
displayOnMATLAB(stream);
// ----------------------------------------------------------------
}
catch (const GenericException & ex) {
matlabPtr->feval(u"disp", 0, std::vector<Array>({factory.createCharArray(ex.GetDescription()) }));
}
}
void displayOnMATLAB(std::ostringstream& stream) {
// Pass stream content to MATLAB fprintf function
matlabPtr->feval(u"fprintf", 0,
std::vector<Array>({ factory.createScalar(stream.str()) }));
// Clear stream buffer
stream.str("");
}
};
This MEX file can be called from Matlab with the following commands:
% Initializes the camera. The camera parameters can also be loaded here.
NameOfMexFile('Init');
% Camera image is captured and sent back to Matlab
[Image] = NameOfMexFile('Grab');
% The camera connection has to be closed.
NameOfMexFile('Delete');
Optimizations and improvements of this code are welcome. There are still problems with its efficiency: an image acquisition takes about 0.6 seconds. This is mainly due to the copy from a cv::Mat image to a TypedArray, which is necessary to return it back to Matlab. See this line in the two loops: Yp[i][j] = openCvImage.at<uint8_t>(i,j);
I have not figured out how to make this more efficient yet. Furthermore, the code cannot be used to return multiple images back to Matlab.
Maybe someone has an idea or hint to make the conversion from cv::Mat to a Matlab array type faster. I already mentioned the problem in another post, see here: How to Return a Opencv image cv::mat to Matlab with the Mex C++ API
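One untested idea for speeding this up: transpose the image first so that OpenCV's row-major buffer matches MATLAB's column-major layout, then copy the whole buffer in one call instead of indexing element by element:
// Untested sketch: bulk copy instead of the two nested loops above.
// After cv::transpose, reading the buffer sequentially yields the original
// image in column-major order, which is what MATLAB expects.
cv::Mat transposed;
cv::transpose(openCvImage, transposed);
const uint8_t *begin = transposed.ptr<uint8_t>(0);
matlab::data::TypedArray<uint8_t> Yp = factory.createArray<uint8_t>(
    { (size_t)openCvImage.rows, (size_t)openCvImage.cols },
    begin, begin + openCvImage.total());
outputs[0] = Yp;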
I am getting stream data from a source whose size will not be known before the final processing, but the minimum is 10 GB. I have to send this large amount of data using gRPC.
I should mention that this large amount of data will be passed through gRPC while the processing of the stream is still ongoing. For this step, I plan to store all the values in a vector.
Regarding sending a large amount of data, I have tried to get ideas and found:
This, where it is mentioned not to pass large data using gRPC and to use some other message protocol instead; however, I am limited to gRPC (at least as of today).
This post, from which I tried to learn how chunked messages can be sent, but I am not sure whether it relates to my problem.
The first post where I found a blog about streaming data using the Go language.
This one, the Python presentation of that post, but it is also incomplete.
The gRPC example could be a good start, but I cannot decode it due to my lack of C++ knowledge.
Since then, I have made a huge update to the question, but its main theme has not changed.
Here is what I have done so far, with some points about my project (the GitHub repo is available here):
A unary RPC is present in the project.
I know that my new bidirectional RPC will take some time. I do not want the unary RPC to wait for the completion of the bidirectional RPC. Right now it works synchronously, with the unary RPC waiting for the streaming one to complete before passing on its status.
I am omitting the unnecessary lines in the C++ code, but giving the whole proto files.
big_data.proto
syntax = "proto3";
package demo_grpc;
message Large_Data {
repeated int32 large_data_collection = 1 [packed=true];
int32 data_chunk_number = 2;
}
addressbook.proto
syntax = "proto3";
package demo_grpc;
import "myproto/big_data.proto";
message S_Response {
string name = 1;
string street = 2;
string zip = 3;
string city = 4;
string country = 5;
int32 double_init_val = 6;
}
message C_Request {
uint32 choose_area = 1;
string name = 2;
int32 init_val = 3;
}
service AddressBook {
rpc GetAddress(C_Request) returns (S_Response) {}
rpc Stream_Chunk_Service(stream Large_Data) returns (stream Large_Data) {}
}
client.cpp
#include <big_data.pb.h>
#include <addressbook.grpc.pb.h>
#include <grpcpp/grpcpp.h>
#include <grpcpp/create_channel.h>
#include <iostream>
#include <numeric>
using namespace std;
// This function prompts the user to set values for the required area
void Client_Request(demo_grpc::C_Request &request_)
{
// processing for the unary RPC, intentionally omitted here
}
// According to the client request, this function displays the values of the protobuf message
void Server_Response(demo_grpc::C_Request &request_, const demo_grpc::S_Response &response_)
{
// processing for the unary RPC, intentionally omitted here
}
// The following function builds a large vector, then chunks it and sends the chunks via stream from client to server
void Stream_Data_Chunk_Request(demo_grpc::Large_Data &request_,
demo_grpc::Large_Data &response_,
uint64_t preferred_chunk_size_in_kibyte)
{
// A dummy vector which in the real case will be the large data set's container
std::vector<int32_t> large_vector;
// iterate 1024*10 times for now
for(int64_t i = 0; i < 1024 * 10; i++)
{
large_vector.push_back(1);
}
uint64_t preferred_chunk_size_in_kibyte_holds_integer_num = 0; // how many integers one chunk holds will be stored here
// total chunk number will be updated here
uint32_t total_chunk = total_chunk_counter(large_vector.size(), preferred_chunk_size_in_kibyte, preferred_chunk_size_in_kibyte_holds_integer_num);
// A temp counter to trace the index of the large_vector
int32_t temp_count = 0;
// the loop runs while the total number of chunks is greater than 0; total_chunk is decremented after each iteration
while(total_chunk > 0)
{
for (int64_t i = temp_count * preferred_chunk_size_in_kibyte_holds_integer_num; i < preferred_chunk_size_in_kibyte_holds_integer_num + temp_count * preferred_chunk_size_in_kibyte_holds_integer_num; i++)
{
// the repeated field large_data_collection is taking value from the large_vector
request_.add_large_data_collection(large_vector[i]);
}
temp_count++;
total_chunk--;
std::string ip_address = "localhost:50051";
auto channel = grpc::CreateChannel(ip_address, grpc::InsecureChannelCredentials());
std::unique_ptr<demo_grpc::AddressBook::Stub> stub = demo_grpc::AddressBook::NewStub(channel);
grpc::ClientContext context;
std::shared_ptr<::grpc::ClientReaderWriter< ::demo_grpc::Large_Data, ::demo_grpc::Large_Data> > stream(stub->Stream_Chunk_Service(&context));
// Once each chunk is complete, this repeated field is cleared. I am not sure whether its
// value can be transferred to the server before this, but my assumption is that it should be.
request_.clear_large_data_collection();
}
}
int main(int argc, char* argv[])
{
std::string client_address = "localhost:50051";
std::cout << "Address of client: " << client_address << std::endl;
// The following part for the Unary RPC
demo_grpc::C_Request query;
demo_grpc::S_Response result;
Client_Request(query);
// This part for the streaming chunk data (Bi directional Stream RPC)
demo_grpc::Large_Data stream_chunk_request_;
demo_grpc::Large_Data stream_chunk_response_;
uint64_t preferred_chunk_size_in_kibyte = 64;
Stream_Data_Chunk_Request(stream_chunk_request_, stream_chunk_response_, preferred_chunk_size_in_kibyte);
// Call
auto channel = grpc::CreateChannel(client_address, grpc::InsecureChannelCredentials());
std::unique_ptr<demo_grpc::AddressBook::Stub> stub = demo_grpc::AddressBook::NewStub(channel);
grpc::ClientContext context;
grpc::Status status = stub->GetAddress(&context, query, &result);
// the following status is for the unary RPC, as far as I have understood the structure
if (status.ok())
{
Server_Response(query, result);
}
else
{
std::cout << status.error_message() << std::endl;
}
return 0;
}
Helper function total_chunk_counter:
#include <cmath>
uint32_t total_chunk_counter(uint64_t num_of_container_content,
uint64_t preferred_chunk_size_in_kibyte,
uint64_t &preferred_chunk_size_in_kibyte_holds_integer_num)
{
uint64_t container_size_in_kibyte = (32ULL * num_of_container_content) / 1024;
preferred_chunk_size_in_kibyte_holds_integer_num = (num_of_container_content * preferred_chunk_size_in_kibyte) / container_size_in_kibyte;
float total_chunk = static_cast<float>(num_of_container_content) / preferred_chunk_size_in_kibyte_holds_integer_num;
return std::ceil(total_chunk);
}
server.cpp, which is totally incomplete:
#include <myproto/big_data.pb.h>
#include <myproto/addressbook.grpc.pb.h>
#include <grpcpp/grpcpp.h>
#include <grpcpp/server_builder.h>
#include <iostream>
class AddressBookService final : public demo_grpc::AddressBook::Service {
public:
virtual ::grpc::Status GetAddress(::grpc::ServerContext* context, const ::demo_grpc::C_Request* request, ::demo_grpc::S_Response* response)
{
switch (request->choose_area())
{
// processing for the unary RPC, intentionally omitted here
std::cout << "Information of " << request->choose_area() << " is sent to Client" << std::endl;
return grpc::Status::OK;
}
// Bi-directional streaming chunk data
virtual ::grpc::Status Stream_Chunk_Service(::grpc::ServerContext* context, ::grpc::ServerReaderWriter< ::demo_grpc::Large_Data, ::demo_grpc::Large_Data>* stream)
{
// stream->Large_Data;
return grpc::Status::OK;
}
};
void RunServer()
{
std::cout << "grpc Version: " << grpc::Version() << std::endl;
std::string server_address = "localhost:50051";
std::cout << "Address of server: " << server_address << std::endl;
grpc::ServerBuilder builder;
builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());
AddressBookService my_service;
builder.RegisterService(&my_service);
std::unique_ptr<grpc::Server> server(builder.BuildAndStart());
server->Wait();
}
int main(int argc, char* argv[])
{
RunServer();
return 0;
}
In summary, my desire:
I need to pass the content of large_vector via the repeated field large_data_collection of the message Large_Data. I should chunk the large_vector and populate the repeated field large_data_collection with that chunk size.
On the server side, all chunks will be concatenated, keeping the exact order of the large_vector. Some processing will be done on them (e.g. doubling the value at each index). Then the whole data will be sent back to the client as a chunked stream; a sketch of what I imagine this handler could look like is below.
It would be great if the present unary RPC did not have to wait for the completion of the bidirectional RPC.
A solution with an example would be really helpful. Thanks in advance. The GitHub repo is available here.
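To make the goal concrete, here is a minimal, untested sketch of what I imagine Stream_Chunk_Service could look like on the server (the doubling is just the example processing from above):
// Untested sketch: read each chunk, double every value, write the chunk back.
virtual ::grpc::Status Stream_Chunk_Service(::grpc::ServerContext* context,
    ::grpc::ServerReaderWriter< ::demo_grpc::Large_Data, ::demo_grpc::Large_Data>* stream)
{
    demo_grpc::Large_Data chunk;
    while (stream->Read(&chunk)) {           // blocks until the client writes or closes
        demo_grpc::Large_Data reply;
        reply.set_data_chunk_number(chunk.data_chunk_number());
        for (int32_t v : chunk.large_data_collection())
            reply.add_large_data_collection(2 * v); // the per-index processing
        stream->Write(reply);                // send the processed chunk back in order
    }
    return grpc::Status::OK;
}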
I am testing out the maximilian library with JUCE. I am trying to use the maxiSample feature and I have implemented it exactly how the example code says to. Whenever I run the standalone app, I get the error "External Headphones (8): EXC_BAD_ACCESS (code=1, address=0x0)" and it gives me a breakpoint at line 747 of maximilian.cpp. It's not my headphones, as it does the same thing with any playback device. Truly at a loss.
I've attached my MainComponent.cpp below. Any advice would be great, thank you!
#include "MainComponent.h"
#include "maximilian.h"
//==============================================================================
MainComponent::MainComponent()
{
// Make sure you set the size of the component after
// you add any child components.
setSize (800, 600);
// Some platforms require permissions to open input channels so request that here
if (juce::RuntimePermissions::isRequired (juce::RuntimePermissions::recordAudio)
&& ! juce::RuntimePermissions::isGranted (juce::RuntimePermissions::recordAudio))
{
juce::RuntimePermissions::request (juce::RuntimePermissions::recordAudio,
[&] (bool granted) { setAudioChannels (granted ? 2 : 0, 2); });
}
else
{
// Specify the number of input and output channels that we want to open
setAudioChannels (2, 2);
}
}
MainComponent::~MainComponent()
{
// This shuts down the audio device and clears the audio source.
shutdownAudio();
sample1.load("/Users/(username)/JuceTestPlugins/maxiSample/Source/kick.wav");
}
//==============================================================================
void MainComponent::prepareToPlay (int samplesPerBlockExpected, double sampleRate)
{
// This function will be called when the audio device is started, or when
// its settings (i.e. sample rate, block size, etc) are changed.
// You can use this function to initialise any resources you might need,
// but be careful - it will be called on the audio thread, not the GUI thread.
// For more details, see the help for AudioProcessor::prepareToPlay()
}
void MainComponent::getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill)
{
// Your audio-processing code goes here!
// For more details, see the help for AudioProcessor::getNextAudioBlock()
// Right now we are not producing any data, in which case we need to clear the buffer
// (to prevent the output of random noise)
//bufferToFill.clearActiveBufferRegion();
for(int sample = 0; sample < bufferToFill.buffer->getNumSamples(); ++sample){
//float sample2 = sample1.
//float wave = tesOsc.sinewave(200);
//double sample2 = sample1.play();
// leftSpeaker[sample] = (0.25 * wave);
// rightSpeaker[sample] = leftSpeaker[sample];
double *output;
output[0] = sample1.play();
output[1] = output[0];
}
}
void MainComponent::releaseResources()
{
// This will be called when the audio device stops, or when it is being
// restarted due to a setting change.
// For more details, see the help for AudioProcessor::releaseResources()
}
//==============================================================================
void MainComponent::paint (juce::Graphics& g)
{
// (Our component is opaque, so we must completely fill the background with a solid colour)
g.fillAll (getLookAndFeel().findColour (juce::ResizableWindow::backgroundColourId));
// You can add your drawing code here!
}
void MainComponent::resized()
{
// This is called when the MainContentComponent is resized.
// If you add any child components, this is where you should
// update their positions.
}
Can't say for sure, but a couple of things catch my attention.
In getNextAudioBlock() you are dereferencing invalid pointers:
double *output;
output[0] = sample1.play();
output[1] = output[0];
The pointer variable output is uninitialised and will probably be filled with garbage or zeros, which will make the program write to invalid memory. This problem is the most likely cause of the EXC_BAD_ACCESS. This method is called from the realtime audio thread, so you probably get a crash on a non-main thread (in this case the thread of External Headphones (8)).
It is also not clear to me what exactly you're trying to do here, so it's hard for me to say how it should be. What I can say is that assigning the result of sample1.play() to a double value looks suspicious.
Normally, when dealing with juce::AudioSourceChannelInfo you would get pointers to the audio buffers like so:
auto** bufferPointer = bufferToFill.buffer->getArrayOfWritePointers();
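For example, here is an untested sketch of how getNextAudioBlock() might fill both output channels from the sample (assuming sample1 is a maxiSample, as in your code):
// Untested sketch: write one mono maxiSample into both output channels.
void MainComponent::getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill)
{
    auto* left  = bufferToFill.buffer->getWritePointer (0, bufferToFill.startSample);
    auto* right = bufferToFill.buffer->getWritePointer (1, bufferToFill.startSample);
    for (int i = 0; i < bufferToFill.numSamples; ++i)
    {
        auto s = (float) sample1.play(); // next sample from the loaded wav
        left[i]  = 0.25f * s;            // scaled down to avoid clipping
        right[i] = left[i];
    }
}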
Further, you are loading a file inside the destructor of MainComponent. This is at least suspicious: why would you load a file during destruction?
MainComponent::~MainComponent()
{
// This shuts down the audio device and clears the audio source.
shutdownAudio();
sample1.load("/Users/(username)/JuceTestPlugins/maxiSample/Source/kick.wav");
}
I am trying to read the parameters of a feature detector (e.g. SIFT) from a YAML file in OpenCV 3.
I tried to use the proposed code from the documentation, but it does not compile at all, so I changed it a little to make it compile:
#include "opencv2/opencv.hpp"
#include <opencv2/core/persistence.hpp>
#include "opencv2/xfeatures2d.hpp"
using namespace cv::xfeatures2d;
int main () {
cv::Ptr<cv::Feature2D> surf = SURF::create();
cv::FileStorage fs("../surf_params.yaml", cv::FileStorage::WRITE);
if( fs.isOpened() ) // if we have file with parameters, read them
{
std::cout << "reading parameters" << std::endl;
surf->read(fs["surf_params"]);
}
else // else modify the parameters and store them; user can later edit the file to use different parameters
{
std::cout << "writing parameters" << std::endl;
cv::Ptr<cv::xfeatures2d::SURF> aux_ptr;
aux_ptr = surf.dynamicCast<cv::xfeatures2d::SURF>();
aux_ptr->setNOctaves(3); // modify a parameter, e.g. the number of octaves
{
cv::internal::WriteStructContext ws(fs, "surf_params", CV_NODE_MAP);
aux_ptr->write(fs);
}
}
fs.release();
// cv::Mat image = cv::imread("myimage.png", 0), descriptors;
// std::vector<cv::KeyPoint> keypoints;
// sift->detectAndCompute(image, cv::noArray(), keypoints, descriptors);
return 0;
}
But the parameters are not read or written at all.
I also checked this Transition Guide, where the section "Algorithm interfaces" says:
General algorithm usage pattern has changed: now it must be created on heap wrapped in smart pointer cv::Ptr. Version 2.4 allowed both stack and heap allocations, directly or via smart pointer.
get and set methods have been removed from the cv::Algorithm class along with CV_INIT_ALGORITHM macro. In 3.0 all properties have been converted to the pairs of getProperty/setProperty pure virtual methods. As a result it is not possible to create and use cv::Algorithm instance by name (using generic Algorithm::create(String) method), one should call corresponding factory method explicitly.
Maybe this means that it is not possible to read and write parameters from XML/YAML files with the read() and write() functions.
Can you give me some example of how I can read the parameters of a feature detector algorithm from an XML/YAML file in OpenCV 3?
Thanks in advance!
If the specific algorithm overrides the methods cv::Algorithm::read and cv::Algorithm::write, then you can use them as described, for example, here.
However, cv::xfeatures2d::SURF doesn't override these methods, so you can't use this approach.
You can, however, store the properties you need to modify in a FileStorage, read and write them as usual, and modify the SURF object:
#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <opencv2/xfeatures2d/nonfree.hpp>
#include <iostream>
int main()
{
cv::Ptr<cv::Feature2D> surf = cv::xfeatures2d::SURF::create();
{
// Try to read from file
cv::FileStorage fs("surf_params.yaml", cv::FileStorage::READ);
if (fs.isOpened())
{
std::cout << "reading parameters" << std::endl;
// Read the parameters
int nOctaves = fs["nOctaves"];
surf.dynamicCast<cv::xfeatures2d::SURF>()->setNOctaves(nOctaves);
}
else
{
// Close the file in READ mode
fs.release();
// Open the file in WRITE mode
cv::FileStorage fs("surf_params.yaml", cv::FileStorage::WRITE);
std::cout << "writing parameters" << std::endl;
fs << "nOctaves" << 3;
// fs in WRITE mode automatically released
}
// fs in READ mode automatically released
}
}
You can make the surf object a pointer to cv::xfeatures2d::SURF to avoid casts:
cv::Ptr<cv::xfeatures2d::SURF> surf = ...
If you need to support different Features2D, you can store in the FileStorage also an identifier for the particular feature, such as:
fs << "Type" << "SURF";
and then conditionally read the options to restore its properties:
std::string type;
cv::FileNode fn_type = fs.root();
fn_type["Type"] >> type;
if(type == "SURF") {
// Read SURF properties...
} else if(type == "SIFT") {
// Read SIFT properties...
}
I'm trying to make an audio plugin which can connect to a local Java server and send it data through a socket (TCP). As I heard many nice things about it, I'm using Boost's ASIO library to do the work.
I'm having quite a strange bug in my code: my AudioUnit C++ client (which I use from inside a DAW; I'm testing with Ableton Live and Logic Pro) can connect to my Java server alright, but when I do a write operation, it seems my write is correctly executed only once (as in, I can monitor any incoming message on my Java server, and only the first message is seen).
I'm using the following code:
-- Inside the header:
boost::asio::io_service io_service;
boost::asio::ip::tcp::socket mySocket(io_service);
boost::asio::ip::tcp::endpoint myEndpoint(boost::asio::ip::address::from_string("127.0.0.1"), 9001);
boost::system::error_code ignored_error;
-- Inside my plugin's constructor
mySocket.connect(myEndpoint);
-- And when I try to send:
boost::asio::write(mySocket, boost::asio::buffer(datastring), ignored_error);
(you will notice that I do not close my socket, because I'd like it to live forever)
I don't think the problem comes from my Java server (though I could be wrong!), because I found a way to make my C++ plugin "work correctly" and send all the messages I want:
If I don't open my socket upon initializing my plugin, but directly when I try sending the message, every message is received by my remote server. That is, every time I call sendMessage(), I do the following:
try {
// Connect to the Java application
mySocket.connect(myEndpoint);
// Write the data
boost::asio::write(mySocket, boost::asio::buffer(datastring), ignored_error);
// Disconnect
mySocket.close();
} catch (const std::exception & e) {std::cout << "Couldn't initialize socket\n";}
Still, I'm not too happy with this code: I have to send about 1000 messages per second, and while that might not be humongous, I don't think opening the socket and connecting to the endpoint every time is efficient (it's a blocking operation too).
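Ideally I'd like to keep the single connection from my constructor and only reconnect when a write actually fails, something like this (untested sketch; the retry logic is just my guess):
// Untested sketch: persistent socket, reconnect only when a write fails.
void SignalProcessorAudioProcessor::sendMessage(std::string datastring) {
    boost::system::error_code error;
    boost::asio::write(mySocket, boost::asio::buffer(datastring), error);
    if (error) {
        // e.g. broken pipe: try to re-establish the connection once and resend
        mySocket.close(error);
        mySocket.connect(myEndpoint, error);
        if (!error)
            boost::asio::write(mySocket, boost::asio::buffer(datastring), error);
    }
}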
Any input which could lead me in the right direction would be greatly appreciated !
For more information, here's my code in a slightly more complete version (with the useless stuff trimmed to keep it short):
#include <cstdlib>
#include <fstream>
#include "PluginProcessor.h"
#include "PluginEditor.h"
#include "SignalMessages.pb.h"
using boost::asio::local::stream_protocol;
//==============================================================================
// Default parameter values
const int defaultAveragingBufferSize = 256;
const int defaultMode = 0;
const float defaultInputSensitivity = 1.0;
const int defaultChannel = 1;
const int defaultMonoStereo = 1; //Mono processing
//==============================================================================
// Variables used by the audio algorithm
int nbBufValProcessed = 0;
float signalSum = 0;
// Used for beat detection
float signalAverageEnergy = 0;
float signalInstantEnergy = 0;
const int thresholdFactor = 5;
const int averageEnergyBufferSize = 11025; //0.25 seconds
//==============================================================================
// Socket used to forward data to the Processing application, and the variables associated with it
boost::asio::io_service io_service;
boost::asio::ip::tcp::socket mySocket(io_service);
boost::asio::ip::tcp::endpoint myEndpoint(boost::asio::ip::address::from_string("127.0.0.1"), 9001);
boost::system::error_code ignored_error;
//==============================================================================
SignalProcessorAudioProcessor::SignalProcessorAudioProcessor()
{
averagingBufferSize = defaultAveragingBufferSize;
inputSensitivity = defaultInputSensitivity;
mode = defaultMode;
monoStereo = defaultMonoStereo;
channel = defaultChannel;
// Connect to the remote server
// Note for Stack Overflow: this is where I'd like to connect to my server!
mySocket.connect(myEndpoint);
}
SignalProcessorAudioProcessor::~SignalProcessorAudioProcessor()
{
}
//==============================================================================
void SignalProcessorAudioProcessor::processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiMessages)
{
// In case we have more outputs than inputs, clear any output
// channels that don't contain input data
for (int i = getNumInputChannels(); i < getNumOutputChannels(); ++i)
buffer.clear (i, 0, buffer.getNumSamples());
//////////////////////////////////////////////////////////////////
// This is the most important part of my code; audio processing takes place here!
// Note for Stack Overflow: this shouldn't be very interesting, as it is not related to my current problem
for (int channel = 0; channel < getNumInputChannels(); ++channel)
{
const float* channelData = buffer.getReadPointer (channel);
for (int i=0; i<buffer.getNumSamples(); i++) {
signalSum += std::abs(channelData[i]);
signalAverageEnergy = ((signalAverageEnergy * (averageEnergyBufferSize-1)) + std::abs(channelData[i])) / averageEnergyBufferSize;
}
}
nbBufValProcessed += buffer.getNumSamples();
if (nbBufValProcessed >= averagingBufferSize) {
signalInstantEnergy = signalSum / (averagingBufferSize * monoStereo);
// If the instant signal energy is thresholdFactor times greater than the average energy, consider that a beat is detected
if (signalInstantEnergy > signalAverageEnergy*thresholdFactor) {
//Set the new signal Average Energy to the value of the instant energy, to avoid having bursts of false beat detections
signalAverageEnergy = signalInstantEnergy;
// Create an impulse signal - note for Stack Overflow: these are Google Protocol Buffer messages; serialization is faster this way
Impulse impulse;
impulse.set_signalid(channel);
std::string datastringImpulse;
impulse.SerializeToString(&datastringImpulse);
sendMessage(datastringImpulse);
}
nbBufValProcessed = 0;
signalSum = 0;
}
}
//==============================================================================
void SignalProcessorAudioProcessor::sendMessage(std::string datastring) {
try {
// Write the data
boost::asio::write(mySocket, boost::asio::buffer(datastring), ignored_error);
} catch (const std::exception & e) {
std::cout << "Caught an error while trying to initialize the socket - the Java server might not be ready\n";
std::cerr << e.what();
}
}
//==============================================================================
// This creates new instances of the plugin..
AudioProcessor* JUCE_CALLTYPE createPluginFilter()
{
return new SignalProcessorAudioProcessor();
}
Has anybody successfully implemented an Instrument using MoMu STK on iOS? I am a bit stuck with the initialization of a stream for an Instrument.
I am using tutorial code, and it looks like something is missing:
RtAudio dac;
// Figure out how many bytes in an StkFloat and setup the RtAudio stream.
RtAudio::StreamParameters parameters;
parameters.deviceId = dac.getDefaultOutputDevice();
parameters.nChannels = 1;
RtAudioFormat format = ( sizeof(StkFloat) == 8 ) ? RTAUDIO_FLOAT64 : RTAUDIO_FLOAT32;
unsigned int bufferFrames = RT_BUFFER_SIZE;
dac.openStream( & parameters, NULL, format, (unsigned int)Stk::sampleRate(), &bufferFrames, &tick, (void *)&data );
The error description says that the output parameters for the output device are invalid, but when I skip assigning the device id, it doesn't work either.
Any idea would be great.
RtAudio is only for desktop apps; there is no need to open a stream when implementing on iOS.
example:
Header file:
#import "Simple.h"
// struct to hold the synth instance
struct TickData {
Simple *synth;
};
// Make an instance of the struct in the @interface
TickData data;
Implementation file:
// init the synth:
data.synth = new Simple();
data.synth->keyOff();
// to trigger note on/off:
data.synth->noteOn(frequency, velocity);
data.synth->noteOff(velocity);
// audio callback method:
for (int i=0; i < FRAMESIZE; i++) {
buffer[i] = data.synth -> tick();
}
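For context, here is an untested sketch of how that loop might sit inside a MoMu-style render callback (assuming an interleaved stereo Float32 buffer, as in the MoMu examples):
// Untested sketch: render callback feeding the STK synth into a stereo buffer.
void audioCallback(Float32* buffer, UInt32 numFrames, void* userData)
{
    TickData* data = (TickData*)userData;
    for (UInt32 i = 0; i < numFrames; i++)
    {
        Float32 s = (Float32) data->synth->tick(); // one mono sample from the synth
        buffer[2*i]     = s; // left
        buffer[2*i + 1] = s; // right
    }
}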
Yep, I have a couple of apps in the store with STK classes running on them. Bear in mind that the setup required to run STK on iOS is different from the one required to run it on your desktop.
Here's a tutorial on how to use STK classes inside an iOS app:
https://arielelkin.github.io/articles/mandolin