Some context to my problem:
I need to establish inter-process communication in C++ using sockets, and I picked the NNG library for that, along with the nngpp C++ wrapper. I need to use the push/pull protocol, so no context handling is available to me. I wrote some code based on the raw example from the nngpp demo. The difference here is that, by using the push/pull protocol, I split this into two separate programs: one for sending and one for receiving.
Problem description:
I need to receive, let's say, a thousand or more messages per second. For now, all messages are captured only when I send about 50/s. That is way too slow, and I believe it can be done faster. The faster I send, the more I lose: at the moment, when sending 1000 msg/s, I lose about 150 messages.
Some words about the code
The code can use the C++17 standard. It is written in an object-oriented manner; in the end I want a class with a "receive" method that simply gives me the received messages. For now, I just print the results to the screen. Below, I supply some parts of the project with descriptions:
NOTE: msgItem is a struct like this:
struct msgItem {
    nng::aio aio;
    nng::msg msg;
    nng::socket_view itemSock;
    explicit msgItem(nng::socket_view sock) : itemSock(sock) {}
};
It is taken from the example mentioned above.
This is the callback function that is executed when a message is received by one of the aios (the callback is passed in the constructor of the aio object). It checks whether the transmission was OK, retrieves my Payload (just a string for now), and pushes it onto a queue while setting a flag. A separate thread then prints those messages from the queue.
void ReceiverBase<Payload>::aioCallback(void *arg) try {
    msgItem *item = static_cast<msgItem *>(arg);
    Payload retMsg{};
    auto result = item->aio.result();
    if (result != nng::error::success) {
        throw nng::exception(result);
    }
    // Here we extract the message
    auto msg = item->aio.release_msg();
    auto const *data = static_cast<typename Payload::value_type *>(msg.body().data());
    auto const count = msg.body().size() / sizeof(typename Payload::value_type);
    std::copy(data, data + count, std::back_inserter(retMsg));
    {
        std::lock_guard<std::mutex> lk(m_msgMx);
        newMessageFlag = true;
        m_messageQueue.push(std::move(retMsg));
    }
    // Re-arm this aio so the socket keeps receiving
    item->itemSock.recv(item->aio);
} catch (const nng::exception &e) {
    fprintf(stderr, "server_cb: %s: %s\n", e.who(), e.what());
} catch (...) {
    fprintf(stderr, "server_cb: unknown exception\n");
}
This is the separate thread that listens for the flag change and prints. The while loop at the end keeps the program running. I use msgCounter to count successfully received messages.
void ReceiverBase<Payload>::start() {
    // Note: the lambda must capture this to reach the members it uses
    auto listenerLambda = [this]() {
        std::string temp;
        while (true) {
            std::lock_guard<std::mutex> lg(m_msgMx);
            if (newMessageFlag) {
                temp = std::move(m_messageQueue.front());
                m_messageQueue.pop();
                ++msgCounter;
                std::cout << msgCounter << "\n";
                newMessageFlag = false;
            }
        }
    };
    std::thread listenerThread(listenerLambda);
    while (true) {
        std::this_thread::sleep_for(std::chrono::microseconds(1));
    }
}
This is my sender application. I tweak the frequency of message sending by changing the value in std::chrono::milliseconds(val).
int main (int argc, char *argv[])
{
    std::string connection_address{"ipc:///tmp/async_demo1"};
    std::string longMsg{" here normally I have some long test text"};
    std::cout << "Trying connecting sender:";
    StringSender sender(connection_address);
    sender.setupConnection();
    for (int i = 0; i < 1000; ++i) {
        std::this_thread::sleep_for(std::chrono::milliseconds(3));
        sender.send(longMsg);
    }
}
And this is the receiver:
int main (int argc, char *argv[])
{
    std::string connection_address{"ipc:///tmp/async_demo1"};
    std::cout << "Trying connecting receiver:";
    StringReceiver receiver(connection_address);
    receiver.setupConnection();
    std::cout << "Connection set up. \n";
    receiver.start();
    return 0;
}
Nothing special in those two applications, as you see. The setup method from StringReceiver looks like this:
bool ReceiverBase<Payload>::setupConnection() {
    m_connected = false;
    try {
        for (size_t i = 0; i < m_parallel; ++i) {
            m_msgItems.at(i) = std::make_unique<msgItem>(m_sock);
            m_msgItems.at(i)->aio =
                nng::aio(ReceiverBase::aioCallback, m_msgItems.at(i).get());
        }
        m_sock.listen(m_adress.c_str());
        m_connected = true;
        for (size_t i = 0; i < m_parallel; ++i) {
            m_msgItems.at(i)->itemSock.recv(m_msgItems.at(i)->aio);
        }
    } catch (const nng::exception &e) {
        printf("%s: %s\n", e.who(), e.what());
    }
    return m_connected;
}
Do you have any suggestions as to why the performance is so low? Am I using the lock_guards properly here? What I want them to do is basically lock the flag and the queue so that only one side has access to them at a time.
NOTE: Adding more listeners thread does not affect the performance either way.
NOTE2: newMessageFlag is atomic
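For reference, this is the kind of condition-variable-based consumer I have been considering instead of the flag polling; a minimal self-contained sketch (not code from my project, the names are made up):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>

std::mutex msgMx;
std::condition_variable msgCv;
std::queue<std::string> messageQueue;

// Producer side: what the aio callback would do instead of setting a flag.
void push(std::string msg) {
    {
        std::lock_guard<std::mutex> lk(msgMx);
        messageQueue.push(std::move(msg));
    }
    msgCv.notify_one(); // wake the consumer only when there is work
}

// Consumer side: replacement for the busy-waiting listener lambda.
void listenerLoop() {
    std::size_t msgCounter = 0;
    while (true) {
        std::unique_lock<std::mutex> lk(msgMx);
        msgCv.wait(lk, [] { return !messageQueue.empty(); }); // sleeps, no spinning
        while (!messageQueue.empty()) { // drain everything we were woken for
            std::string msg = std::move(messageQueue.front());
            messageQueue.pop();
            ++msgCounter;
        }
        lk.unlock();
        std::cout << msgCounter << "\n"; // print outside the lock
    }
}

The idea is that the consumer sleeps until notified instead of grabbing the mutex in a tight loop, which I suspect is what starves the aio callbacks in my current version.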
From a source I am getting stream data whose size will not be known before the final processing, but the minimum is 10 GB. I have to send this large amount of data using gRPC.
I need to mention here that this large amount of data will be passed through gRPC while the processing of the stream is done. In this step, I have thought to store all the values in a vector.
Regarding sending a large amount of data, I have tried to get ideas and found:
A post where it is mentioned not to pass large data using gRPC, and to use some other message protocol instead; I am limited to gRPC (at least as of today).
A post from which I tried to learn how a chunked message can be sent, but I am not sure whether it relates to my problem or not.
A first post where I found a blog about streaming data using the Go language.
A presentation of the same approach using Python, but it is also incomplete.
A gRPC example that could be a good start, but I cannot decode it due to my lack of C++ knowledge.
Since then, I have made a big update to the question, but its main theme has not changed.
What I have done so far, and some points about my project (the GitHub repo is available here):
A unary RPC is present in the project.
I know that my new bidirectional RPC will take some time. I want the unary RPC not to wait for the completion of the bidirectional RPC; right now everything is synchronous, with the unary RPC waiting for the streaming one to complete before it passes its status.
I am omitting the unnecessary lines in the C++ code, but giving the whole proto files.
big_data.proto
syntax = "proto3";
package demo_grpc;
message Large_Data {
repeated int32 large_data_collection = 1 [packed=true];
int32 data_chunk_number = 2;
}
addressbook.proto
syntax = "proto3";
package demo_grpc;
import "myproto/big_data.proto";
message S_Response {
string name = 1;
string street = 2;
string zip = 3;
string city = 4;
string country = 5;
int32 double_init_val = 6;
}
message C_Request {
uint32 choose_area = 1;
string name = 2;
int32 init_val = 3;
}
service AddressBook {
rpc GetAddress(C_Request) returns (S_Response) {}
rpc Stream_Chunk_Service(stream Large_Data) returns (stream Large_Data) {}
}
client.cpp
#include <big_data.pb.h>
#include <addressbook.grpc.pb.h>
#include <grpcpp/grpcpp.h>
#include <grpcpp/create_channel.h>
#include <iostream>
#include <numeric>
#include <vector> // added: needed for std::vector

using namespace std;

// Declared in the helper file below
uint32_t total_chunk_counter(uint64_t num_of_container_content,
                             uint64_t preferred_chunk_size_in_kibyte,
                             uint64_t &preferred_chunk_size_in_kibyte_holds_integer_num);

// This function prompts the user to set values for the required area
void Client_Request(demo_grpc::C_Request &request_)
{
    // do processing for unary rpc. Intentionally omitted here
}

// According to the client request, this function displays the value of the protobuf message
void Server_Response(demo_grpc::C_Request &request_, const demo_grpc::S_Response &response_)
{
    // do processing for unary rpc. Intentionally omitted here
}

// The following function builds a large vector and then chunks it to send via stream from client to server
void Stream_Data_Chunk_Request(demo_grpc::Large_Data &request_,
                               demo_grpc::Large_Data &response_,
                               uint64_t preferred_chunk_size_in_kibyte)
{
    // A dummy vector which in the real case will be the large data set's container
    std::vector<int32_t> large_vector;
    // iterate 1024 * 10 times for now
    for(int64_t i = 0; i < 1024 * 10; i++)
    {
        large_vector.push_back(1);
    }
    // how many integers one chunk will hold ends up here
    uint64_t preferred_chunk_size_in_kibyte_holds_integer_num = 0;
    // the total chunk number is computed here
    uint32_t total_chunk = total_chunk_counter(large_vector.size(), preferred_chunk_size_in_kibyte, preferred_chunk_size_in_kibyte_holds_integer_num);
    // A temp counter to trace the index of the large_vector
    int32_t temp_count = 0;
    // the loop runs while the total number of chunks is greater than 0; after each iteration total_chunk is decremented
    while(total_chunk > 0)
    {
        for (int64_t i = temp_count * preferred_chunk_size_in_kibyte_holds_integer_num; i < preferred_chunk_size_in_kibyte_holds_integer_num + temp_count * preferred_chunk_size_in_kibyte_holds_integer_num; i++)
        {
            // the repeated field large_data_collection takes its values from large_vector
            request_.add_large_data_collection(large_vector[i]);
        }
        temp_count++;
        total_chunk--;
        std::string ip_address = "localhost:50051";
        auto channel = grpc::CreateChannel(ip_address, grpc::InsecureChannelCredentials());
        std::unique_ptr<demo_grpc::AddressBook::Stub> stub = demo_grpc::AddressBook::NewStub(channel);
        grpc::ClientContext context;
        std::shared_ptr<::grpc::ClientReaderWriter< ::demo_grpc::Large_Data, ::demo_grpc::Large_Data> > stream(stub->Stream_Chunk_Service(&context));
        // When each chunk reaches its size, this repeated field is cleared. I am not sure whether before this
        // the value can be transferred to the server or not, but my assumption is that it should be done here
        request_.clear_large_data_collection();
    }
}
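Just to make my intent concrete, this is the shape I think the write loop should eventually have: create the stream once, write one Large_Data message per chunk, then close the write side and read the processed chunks back. This is only a sketch (untested; it relies on the includes above plus <algorithm>):

void Stream_Chunks_Sketch(demo_grpc::AddressBook::Stub &stub,
                          const std::vector<int32_t> &large_vector,
                          uint64_t ints_per_chunk)
{
    grpc::ClientContext context;
    auto stream = stub.Stream_Chunk_Service(&context);

    demo_grpc::Large_Data chunk;
    int32_t chunk_number = 0;
    for (size_t i = 0; i < large_vector.size(); i += ints_per_chunk) {
        chunk.Clear();
        chunk.set_data_chunk_number(chunk_number++);
        const size_t end = std::min(large_vector.size(), i + ints_per_chunk);
        for (size_t j = i; j < end; ++j)
            chunk.add_large_data_collection(large_vector[j]);
        if (!stream->Write(chunk)) // one chunk per stream message
            break;                 // stream broken
    }
    stream->WritesDone(); // tell the server we are done writing

    demo_grpc::Large_Data reply;
    while (stream->Read(&reply)) {
        // the processed chunks would be concatenated here, in order
    }
    grpc::Status status = stream->Finish();
}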
int main(int argc, char* argv[])
{
    std::string client_address = "localhost:50051";
    std::cout << "Address of client: " << client_address << std::endl;

    // The following part is for the unary RPC
    demo_grpc::C_Request query;
    demo_grpc::S_Response result;
    Client_Request(query);

    // This part is for the streaming chunk data (bidirectional stream RPC)
    demo_grpc::Large_Data stream_chunk_request_;
    demo_grpc::Large_Data stream_chunk_response_;
    uint64_t preferred_chunk_size_in_kibyte = 64;
    Stream_Data_Chunk_Request(stream_chunk_request_, stream_chunk_response_, preferred_chunk_size_in_kibyte);

    // Call
    auto channel = grpc::CreateChannel(client_address, grpc::InsecureChannelCredentials());
    std::unique_ptr<demo_grpc::AddressBook::Stub> stub = demo_grpc::AddressBook::NewStub(channel);
    grpc::ClientContext context;
    grpc::Status status = stub->GetAddress(&context, query, &result);

    // the following status is for the unary rpc, as far as I have understood the structure
    if (status.ok())
    {
        Server_Response(query, result);
    }
    else
    {
        std::cout << status.error_message() << std::endl;
    }
    return 0;
}
Helper function total_chunk_counter:
#include <cmath>

uint32_t total_chunk_counter(uint64_t num_of_container_content,
                             uint64_t preferred_chunk_size_in_kibyte,
                             uint64_t &preferred_chunk_size_in_kibyte_holds_integer_num)
{
    uint64_t container_size_in_kibyte = (32ULL * num_of_container_content) / 1024;
    preferred_chunk_size_in_kibyte_holds_integer_num = (num_of_container_content * preferred_chunk_size_in_kibyte) / container_size_in_kibyte;
    float total_chunk = static_cast<float>(num_of_container_content) / preferred_chunk_size_in_kibyte_holds_integer_num;
    return std::ceil(total_chunk);
}
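Tracing this with the dummy data above (1024 * 10 = 10240 ints, preferred chunk size 64): container size = 32 * 10240 / 1024 = 320, ints per chunk = 10240 * 64 / 320 = 2048, and ceil(10240 / 2048) = 5 chunks. Note that 2048 int32 values are 8 KiB rather than 64 KiB, so I may have a bits-versus-bytes mix-up in the 32ULL factor (an int32 is 4 bytes); I would appreciate a check on that as well.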
server.cpp, which is totally incomplete:
#include <myproto/big_data.pb.h>
#include <myproto/addressbook.grpc.pb.h>
#include <grpcpp/grpcpp.h>
#include <grpcpp/server_builder.h>
#include <iostream>

class AddressBookService final : public demo_grpc::AddressBook::Service {
public:
    virtual ::grpc::Status GetAddress(::grpc::ServerContext* context, const ::demo_grpc::C_Request* request, ::demo_grpc::S_Response* response)
    {
        switch (request->choose_area())
        {
            // do processing for unary rpc. Intentionally omitted here
        }
        std::cout << "Information of " << request->choose_area() << " is sent to Client" << std::endl;
        return grpc::Status::OK;
    }

    // Bi-directional streaming chunk data
    virtual ::grpc::Status Stream_Chunk_Service(::grpc::ServerContext* context, ::grpc::ServerReaderWriter< ::demo_grpc::Large_Data, ::demo_grpc::Large_Data>* stream)
    {
        // stream->Read() / stream->Write() should go here; see my summary below
        return grpc::Status::OK;
    }
};
void RunServer()
{
    std::cout << "grpc Version: " << grpc::Version() << std::endl;
    std::string server_address = "localhost:50051";
    std::cout << "Address of server: " << server_address << std::endl;

    grpc::ServerBuilder builder;
    builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());

    AddressBookService my_service;
    builder.RegisterService(&my_service);

    std::unique_ptr<grpc::Server> server(builder.BuildAndStart());
    server->Wait();
}

int main(int argc, char* argv[])
{
    RunServer();
    return 0;
}
In summary, my desire:
I need to pass the content of large_vector using the repeated field large_data_collection of the message Large_Data. I should chunk large_vector and populate the repeated field large_data_collection with each chunk.
On the server side, all chunks will be concatenated, keeping the exact order of large_vector. Some processing will be done on them (e.g. doubling the value at each index). Then the whole data will be sent back to the client as a chunked stream, as sketched below.
It would be great if the present unary RPC did not wait for the completion of the bidirectional RPC.
A solution with an example would be really helpful. Thanks in advance. The GitHub repo is available here.
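To make the desired server behaviour concrete, this is the shape I imagine Stream_Chunk_Service eventually taking; only a sketch on my part (untested), with the reply chunk size chosen arbitrarily:

::grpc::Status Stream_Chunk_Service(::grpc::ServerContext* context,
    ::grpc::ServerReaderWriter< ::demo_grpc::Large_Data, ::demo_grpc::Large_Data>* stream)
{
    std::vector<int32_t> assembled;
    demo_grpc::Large_Data chunk;

    // Read() yields the chunks in the order the client wrote them
    while (stream->Read(&chunk)) {
        for (int i = 0; i < chunk.large_data_collection_size(); ++i)
            assembled.push_back(chunk.large_data_collection(i));
    }

    // example processing: double every value
    for (auto &v : assembled) v *= 2;

    // stream the processed data back in chunks (2048 ints here, arbitrary)
    const size_t ints_per_chunk = 2048;
    demo_grpc::Large_Data out;
    int32_t chunk_number = 0;
    for (size_t i = 0; i < assembled.size(); i += ints_per_chunk) {
        out.Clear();
        out.set_data_chunk_number(chunk_number++);
        const size_t end = std::min(assembled.size(), i + ints_per_chunk);
        for (size_t j = i; j < end; ++j)
            out.add_large_data_collection(assembled[j]);
        stream->Write(out);
    }
    return grpc::Status::OK;
}

As for the unary RPC not waiting: each RPC has its own ClientContext and stream, so my understanding is that issuing GetAddress from a separate thread (or simply before calling Finish() on the stream) would keep the two independent; confirmation would be welcome.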
I am working on a game server that uses sockets. I implemented a polling function that sends the message "[POLL]" over all player sockets in a lobby every second, to notify the player clients that their connection is still alive.
If I disconnect on the client side, the socket is still polled with no errors. However, if I create a new connection with the same client (it gets a new FD and is added to the map as a second player), the whole server crashes without any exceptions/warnings/messages when it attempts to write to the previous socket FD. My call to Write on the socket is wrapped in a try/catch that doesn't catch any exceptions, and when debugging with gdb I am not given any error message.
This is the Socket Write function:
int Socket::Write(ByteArray const& buffer)
{
    if (!open)
    {
        return -1;
    }
    // Convert buffer to a raw char array
    char* raw = new char[buffer.v.size()];
    for (size_t i = 0; i < buffer.v.size(); i++)
    {
        raw[i] = buffer.v[i];
    }
    // Perform the write operation
    int returnValue = write(GetFD(), raw, buffer.v.size()); // <- Crashes program
    delete[] raw; // free the temporary buffer (was leaked before)
    if (returnValue <= 0)
    {
        open = false;
    }
    return returnValue;
}
And this is the Poll function (Players are stored in a map of uint -> Socket*):
/*
    Polls all connected players to tell them
    to keep their connections alive.
*/
void Lobby::Poll()
{
    playerMtx.lock();
    for (auto it = players.begin(); it != players.end(); it++)
    {
        try
        {
            if (it->second != nullptr && it->second->IsOpen())
            {
                it->second->Write("[POLL]");
            }
        }
        catch (...)
        {
            std::cout << "Failed to write to " << it->first << std::endl;
        }
    }
    playerMtx.unlock();
}
I would expect to see the "Failed to write to" message, but instead the entire server program exits with no messaging. What could be happening here?
I was unable to find the reason for the program crashing in the call to write, but I was able to find a workaround.
I perform a poll operation on the file descriptor prior to calling write and query the POLLNVAL event. If I receive a nonzero value, the FD is no longer valid.
// Check if FD is valid
struct pollfd pollFd;
pollFd.fd = GetFD();
pollFd.events = POLLNVAL;
if (poll(&pollFd, 1, 0) > 0)
{
    open = false;
    return -1;
}
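A possible explanation for the silent exit, which I have not verified in this project: on Linux, write() to a socket whose peer has gone away raises SIGPIPE, which terminates the process by default and never surfaces as a C++ exception. Ignoring the signal, or writing with send() and MSG_NOSIGNAL, makes the call fail with errno == EPIPE instead:

#include <csignal>
#include <sys/socket.h>

// Option 1: ignore SIGPIPE process-wide (once, at server startup), so a
// write to a dead socket returns -1 with errno == EPIPE instead of killing us.
void DisableSigpipe()
{
    std::signal(SIGPIPE, SIG_IGN);
}

// Option 2: suppress the signal per call by using send() instead of write().
ssize_t SafeSend(int fd, const char* data, size_t len)
{
    return send(fd, data, len, MSG_NOSIGNAL);
}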
Here I have a program that wants to:
1. detect whether it's the only instance
1.1. it does that by trying to create a Unix domain socket and trying to bind it to a specific address
2. if a duplicate program is not running, establish a UDS and then listen on the socket
2.1. if any message comes through that socket, the program will log the incoming message
2.2. otherwise it should keep listening on the socket forever
3. if there's a duplicate program, it should send a message and then exit
Here's what I have:
import std.socket, std.experimental.logger;

immutable string socketAddress = "\0/tmp/com.localserver.myapp";

void main()
{
    auto socket = new std.socket.Socket(std.socket.AddressFamily.UNIX,
            std.socket.SocketType.STREAM);
    auto addr = new std.socket.UnixAddress(socketAddress);

    auto isUnique = () {
        bool result;
        scope (success)
            log("returns: ", result);
        try
        {
            socket.bind(addr);
            result = true;
        }
        catch (std.socket.SocketOSException e)
            result = false;
        // else throw error
        return result;
    }();

    if (isUnique)
    {
        log("Unique instance detected. Listening...");
        // works up to now
        char[] buffer = [];
        while (1)
        {
            socket.listen(0);
            socket.receive(buffer);
            if (buffer != []) {
                log("Received message: ", buffer);
            }
            buffer = [];
        }
    }
    else
    {
        log("Duplicate instance detected.");
        socket.connect(addr);
        import std.stdio;
        stdout.write("Enter your message:\t");
        socket.send(readln());
        log("Message has been sent. Exiting.");
    }
}
The documentation does not seem very friendly to those who do not have any experience in socket programming. How can I send and receive messages with std.socket.Socket?
After binding, you actually need to accept. It will return a new Socket instance which you can actually receive from. Your client side branch looks ok. I think that is your key mistake here.
I also have a code sample in my book that shows basic functionality of std.socket which can help as an example:
http://arsdnet.net/dcode/book/chapter_02/03/
It is TCP, but making it Unix just means changing the family, as you already did in your code.
You can also look up socket tutorials for C and so on; the D Socket is just a thin wrapper around the same BSD-style socket functions.
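To spell out the same sequence in BSD-socket terms (which both C and the D wrapper sit on), the server side goes socket, bind, listen, accept, and you receive on the socket that accept returns, not on the listener. A rough C++ sketch, with error checks omitted and a filesystem path used instead of the abstract-namespace address for simplicity:

#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>

int main()
{
    int listener = socket(AF_UNIX, SOCK_STREAM, 0);
    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, "/tmp/com.localserver.myapp", sizeof(addr.sun_path) - 1);
    bind(listener, (sockaddr*)&addr, sizeof(addr));
    listen(listener, 1);                           // mark the socket passive
    int conn = accept(listener, nullptr, nullptr); // NEW socket for this client
    char buffer[64];
    ssize_t n = recv(conn, buffer, sizeof(buffer), 0); // receive on conn, not listener
    if (n > 0)
        std::fwrite(buffer, 1, n, stdout);
    close(conn);
    close(listener);
}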
As Adam pointed out, I had to use the listen() method first and then call the accept() method, which returns a socket that can receive messages. The receiver socket then takes a char[N] buffer.
import std.socket, std.experimental.logger;

class UDSIPC
{
private:
    static immutable string socketAddressName = "\0/tmp/com.localserver.myapp";
    static immutable size_t messageBufferSize = 64;
    Socket socket;
    UnixAddress uaddr;

public:
    this(in string socketAddressName = socketAddressName)
    {
        socket = new Socket(AddressFamily.UNIX, SocketType.STREAM);
        // use the constructor argument (the original accidentally ignored it)
        uaddr = new UnixAddress(socketAddressName);
    }

    bool getUniqueness()
    {
        bool result;
        scope (success)
            log("returns: ", result);
        try
        {
            socket.bind(uaddr);
            result = true;
        }
        catch (SocketOSException e)
            result = false;
        // else throw error
        return result;
    }

    string getMessage()
    {
        socket.listen(0);
        auto receiverSocket = socket.accept();
        char[messageBufferSize] buffer;
        auto amount = receiverSocket.receive(buffer);
        import std.string;
        return format!"%s"(buffer[0 .. amount]);
    }

    void sendMessage(in string message)
    {
        socket.connect(uaddr);
        socket.send(message);
    }
}
void main()
{
    auto ipc = new UDSIPC();
    if (ipc.getUniqueness())
    {
        while (true)
        {
            log(ipc.getMessage());
        }
    }
    else
    {
        import std.stdio, std.string;
        ipc.sendMessage(readln().chomp());
    }
}
I am writing a C++ SNMP server using the NET-SNMP library. I read the documentation and still have one question: can multiple threads share a single SNMP session and use it in procedures like snmp_sess_synch_response() simultaneously, or must I init and open a new session in each thread?
Well, when I try to call snmp_sess_synch_response() from two different threads using the same opaque session pointer simultaneously, one of three errors always occurs: a memory access violation, an endless WaitForSingleObject() in both threads, or a heap allocation error.
I suppose I can treat this as the answer: sharing a single session between multiple threads is unsafe, because using it in procedures like snmp_sess_synch_response() simultaneously causes these errors (a per-thread-session sketch follows after the code below).
P.S. Here is the piece of code described above:
void* _opaqueSession;
boost::mutex _sessionMtx;

std::shared_ptr<netsnmp_pdu> ReadObjectValue(Oid& objectID)
{
    netsnmp_pdu* requestPdu = snmp_pdu_create(SNMP_MSG_GET);
    netsnmp_pdu* response = 0;
    snmp_add_null_var(requestPdu, objectID.GetObjId(), objectID.GetLen());
    void* opaqueSessionCopy;
    {
        // Locks the _opaqueSession, wherever it appears
        boost::mutex::scoped_lock lock(_sessionMtx);
        opaqueSessionCopy = _opaqueSession;
    }
    // Errors here!
    snmp_sess_synch_response(opaqueSessionCopy, requestPdu, &response);
    std::shared_ptr<netsnmp_pdu> result(response);
    return result;
}

void ExecuteThread1()
{
    Oid sysName(".1.3.6.1.2.1.1.5.0");
    try
    {
        while(true)
        {
            boost::thread::interruption_point();
            ReadObjectValue(sysName);
        }
    }
    catch(...)
    {}
}

void ExecuteThread2()
{
    Oid sysServices(".1.3.6.1.2.1.1.7.0");
    try
    {
        while(true)
        {
            boost::thread::interruption_point();
            ReadObjectValue(sysServices);
        }
    }
    catch(...)
    {}
}

int main()
{
    std::string community = "public";
    std::string ipAddress = "127.0.0.1";

    snmp_session session;
    {
        SNMP::snmp_sess_init(&session);
        session.timeout = 500000;
        session.retries = 0;
        session.version = SNMP_VERSION_2c;
        session.remote_port = 161;
        session.peername = (char*)ipAddress.c_str();
        session.community = (u_char*)community.c_str();
        session.community_len = community.size();
    }
    _opaqueSession = snmp_sess_open(&session);

    boost::thread thread1 = boost::thread(&ExecuteThread1);
    boost::thread thread2 = boost::thread(&ExecuteThread2);

    boost::this_thread::sleep(boost::posix_time::seconds(30));

    thread1.interrupt();
    thread1.join();
    thread2.interrupt();
    thread2.join();
    return 0;
}
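Following from that conclusion, the fix I would try is to give each thread its own session through the single-session API, so the opaque pointer is never shared. A sketch (untested; Oid is my own wrapper class from the code above):

#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>

void WorkerThread(const std::string& ip, const std::string& community, Oid& oid)
{
    snmp_session session;
    snmp_sess_init(&session);
    session.timeout = 500000;
    session.retries = 0;
    session.version = SNMP_VERSION_2c;
    session.peername = const_cast<char*>(ip.c_str());
    session.community = (u_char*)community.c_str();
    session.community_len = community.size();

    void* opaque = snmp_sess_open(&session); // per-thread handle
    if (!opaque)
        return;

    netsnmp_pdu* requestPdu = snmp_pdu_create(SNMP_MSG_GET);
    snmp_add_null_var(requestPdu, oid.GetObjId(), oid.GetLen());

    netsnmp_pdu* response = nullptr;
    snmp_sess_synch_response(opaque, requestPdu, &response); // safe: session not shared
    if (response)
        snmp_free_pdu(response);

    snmp_sess_close(opaque); // close this thread's session
}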
I'm trying to make an audio plugin which can connect to a local Java server and send it data through a socket (TCP). As I heard many nice things about it, I'm using Boost's ASIO library to do the work.
I'm having quite a strange bug in my code: my AudioUnit C++ client (which I use from inside a DAW; I'm testing with Ableton Live and Logic Pro) can connect to my Java server just fine, but when I do a write operation, the write seems to be executed correctly only once (as in, I can monitor incoming messages on my Java server, and only the first message is ever seen).
I'm using the following code:
-- Inside the header :
boost::asio::io_service io_service;
boost::asio::ip::tcp::socket mySocket(io_service);
boost::asio::ip::tcp::endpoint myEndpoint(boost::asio::ip::address::from_string("127.0.0.1"), 9001);
boost::system::error_code ignored_error;
-- Inside my plugin's constructor
mySocket.connect(myEndpoint);
-- And when I try to send :
boost::asio::write(mySocket, boost::asio::buffer(datastring), ignored_error);
(you will notice that I do not close my socket, because I'd like it to live forever)
I don't think the problem comes from my Java server (though I could be wrong!), because I found a way to make my C++ plugin "work correctly" and send all the messages I want:
If I don't open my socket upon initializing my plugin, but instead right when I try to send the message, every message is received by my remote server. That is, every time I call sendMessage(), I do the following:
try {
    // Connect to the Java application
    mySocket.connect(myEndpoint);
    // Write the data
    boost::asio::write(mySocket, boost::asio::buffer(datastring), ignored_error);
    // Disconnect
    mySocket.close();
} catch (const std::exception & e) {std::cout << "Couldn't initialize socket\n";}
Still, I'm not too happy with this code: I have to send about 1000 messages per second, and while that might not be humongous, I don't think opening the socket and connecting to the endpoint every time is efficient (it's a blocking operation, too).
Any input which could lead me in the right direction would be greatly appreciated !
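For reference, this is the variant of the send function I am considering: it keeps the socket open but at least surfaces write errors instead of discarding them through ignored_error, and reconnects once on failure (a sketch, untested):

void sendMessageChecked(const std::string &datastring)
{
    boost::system::error_code ec;
    boost::asio::write(mySocket, boost::asio::buffer(datastring), ec);
    if (ec) {
        std::cerr << "write failed: " << ec.message() << ", reconnecting\n";
        mySocket.close(ec);               // ignore errors from close
        mySocket.connect(myEndpoint, ec);
        if (!ec)
            boost::asio::write(mySocket, boost::asio::buffer(datastring), ec);
    }
}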
For more information, here's my code in a slightly more complete version (with the useless stuff trimmed to keep it short):
#include <cstdlib>
#include <fstream>
#include <boost/asio.hpp> // added: needed for the boost::asio objects below
#include "PluginProcessor.h"
#include "PluginEditor.h"
#include "SignalMessages.pb.h"

using boost::asio::local::stream_protocol;

//==============================================================================
// Default parameter values
const int defaultAveragingBufferSize = 256;
const int defaultMode = 0;
const float defaultInputSensitivity = 1.0;
const int defaultChannel = 1;
const int defaultMonoStereo = 1; // Mono processing

//==============================================================================
// Variables used by the audio algorithm
int nbBufValProcessed = 0;
float signalSum = 0;
// Used for beat detection
float signalAverageEnergy = 0;
float signalInstantEnergy = 0;
const int thresholdFactor = 5;
const int averageEnergyBufferSize = 11025; // 0.25 seconds

//==============================================================================
// Socket used to forward data to the Processing application, and the variables associated with it
boost::asio::io_service io_service;
boost::asio::ip::tcp::socket mySocket(io_service);
boost::asio::ip::tcp::endpoint myEndpoint(boost::asio::ip::address::from_string("127.0.0.1"), 9001);
boost::system::error_code ignored_error;
//==============================================================================
SignalProcessorAudioProcessor::SignalProcessorAudioProcessor()
{
    averagingBufferSize = defaultAveragingBufferSize;
    inputSensitivity = defaultInputSensitivity;
    mode = defaultMode;
    monoStereo = defaultMonoStereo;
    channel = defaultChannel;

    // Connect to the remote server
    // Note for stack overflow : this is where I'd like to connect to my server!
    mySocket.connect(myEndpoint);
}

SignalProcessorAudioProcessor::~SignalProcessorAudioProcessor()
{
}
//==============================================================================
void SignalProcessorAudioProcessor::processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiMessages)
{
    // In case we have more outputs than inputs, clear any output
    // channels that don't contain input data
    for (int i = getNumInputChannels(); i < getNumOutputChannels(); ++i)
        buffer.clear (i, 0, buffer.getNumSamples());

    //////////////////////////////////////////////////////////////////
    // This is the most important part of my code, audio processing takes place here!
    // Note for stack overflow : this shouldn't be very interesting, as it is not related to my current problem
    for (int channel = 0; channel < getNumInputChannels(); ++channel)
    {
        const float* channelData = buffer.getReadPointer (channel);
        for (int i = 0; i < buffer.getNumSamples(); i++) {
            signalSum += std::abs(channelData[i]);
            signalAverageEnergy = ((signalAverageEnergy * (averageEnergyBufferSize - 1)) + std::abs(channelData[i])) / averageEnergyBufferSize;
        }
    }

    nbBufValProcessed += buffer.getNumSamples();
    if (nbBufValProcessed >= averagingBufferSize) {
        signalInstantEnergy = signalSum / (averagingBufferSize * monoStereo);

        // If the instant signal energy is thresholdFactor times greater than the average energy, consider that a beat is detected
        if (signalInstantEnergy > signalAverageEnergy * thresholdFactor) {
            // Set the new signal average energy to the value of the instant energy, to avoid bursts of false beat detections
            signalAverageEnergy = signalInstantEnergy;

            // Create an impulse signal - note for stack overflow : these are Google Protocol Buffers messages, serialization is faster this way
            Impulse impulse;
            impulse.set_signalid(channel);
            std::string datastringImpulse;
            impulse.SerializeToString(&datastringImpulse);
            sendMessage(datastringImpulse);
        }
        nbBufValProcessed = 0;
        signalSum = 0;
    }
}
//==============================================================================
void SignalProcessorAudioProcessor::sendMessage(std::string datastring) {
    try {
        // Write the data
        boost::asio::write(mySocket, boost::asio::buffer(datastring), ignored_error);
    } catch (const std::exception & e) {
        std::cout << "Caught an error while trying to write to the socket - the Java server might not be ready\n";
        std::cerr << e.what();
    }
}
//==============================================================================
// This creates new instances of the plugin..
AudioProcessor* JUCE_CALLTYPE createPluginFilter()
{
    return new SignalProcessorAudioProcessor();
}