No sound when using SDL_Mixer in C++/Linux

I'm trying to use SDL_mixer in C++ under Linux to play sounds asynchronously, but it somehow doesn't work: when I run the program, no sound plays at all. I'm not very familiar with SDL or classes, though, so it would be very helpful if someone could spot the error in my code.
My header file (sample.h):
#pragma once
#include <string>
#include <memory>
#include "SDL_mixer.h"

class sample {
public:
    sample(const std::string &path, int volume);
    void play();
    void play(int times);
    void set_volume(int volume);
private:
    std::unique_ptr<Mix_Chunk, void (*)(Mix_Chunk *)> chunk;
};
My main program (.cpp):
#include "sample.h"
#include <iostream>
sample::sample(const std::string &path, int volume) : chunk(Mix_LoadWAV(path.c_str()), Mix_FreeChunk) {
if (!chunk.get()) {
std::cout << "Could not load audio sample: " << path << std::endl;
}
Mix_VolumeChunk(chunk.get(), volume);
}
void sample::play() {
Mix_PlayChannel(-1, chunk.get(), 0);
}
void sample::play(int times) {
Mix_PlayChannel(-1, chunk.get(), times - 1);
}
void sample::set_volume(int volume) {
Mix_VolumeChunk(chunk.get(), volume);
}
int main() {
if (Mix_Init(MIX_INIT_FLAC | MIX_INIT_MP3 | MIX_INIT_OGG) < 0) {
return -1;
}
if (Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 1024) < 0) {
std::cout << "Can not initialize mixer!" << std::endl;
return -1;
}
// Amount of channels (Max amount of sounds playing at the same time)
Mix_AllocateChannels(32);
sample s("Snare-Drum-1.wav", MIX_MAX_VOLUME / 2);
s.play();
Mix_Quit();
return 0;
}

Your binary runs to completion and exits before sticking around long enough for the audio to be rendered, so the solution is to keep the code running longer. I got your code working by adding this:
s.play();
std::chrono::milliseconds timespan(2000); // or whatever
std::this_thread::sleep_for(timespan);
In your header, replace
#include "SDL_mixer.h"
with
#include <SDL2/SDL_mixer.h>
#include <chrono>
#include <thread>
so that it now compiles against SDL2 rather than SDL:
g++ -o sample sample.cpp -lSDL2 -lSDL2_mixer
So ... what is the real SDL2 solution? Well, I would say a typical use case is that SDL2 is part of a game which keeps running, so the code base has an event loop that stays active long enough for the audio to be rendered. Another solution, short of explicitly sleeping or running a GUI, is to put the code into a server. The SDL2 API itself must have its own one-liner for this as well.
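As for that one-liner: a minimal sketch, assuming you simply want main() to block until playback finishes, is to poll Mix_Playing() (passing -1 counts the channels still playing) instead of sleeping for a fixed time:

#include <SDL2/SDL.h>        // for SDL_Delay
#include <SDL2/SDL_mixer.h>

// ... same setup as in main() above, then:
s.play();
while (Mix_Playing(-1) > 0) { // -1 = number of channels still playing
    SDL_Delay(100);           // poll instead of busy-waiting
}
Mix_Quit();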

Related

How to Run Code in a Loop Asynchronously without Stopping other Code C++

I was wondering how to run a loop in the background of a C++ program without stopping the main function, something like setInterval in JavaScript.
I don't really want to use any libraries for this, as I don't want to complicate the installation in the embedded machine.
This should give enough of an example to build from.
#include <thread>
#include <chrono>

void background(std::chrono::milliseconds interval) {
    while (1) {
        // do your task
        std::this_thread::sleep_for(interval);
    }
}

int main() {
    auto interval = std::chrono::milliseconds(500);
    std::thread background_worker(&background, interval);
    // main work
    background_worker.join();
}
EDIT: For those without std::thread on POSIX systems:
#include <pthread.h>
#include <unistd.h>

void *background(void *interval) {
    unsigned int interval_ms = (*(unsigned int*)interval) * 1000;
    while (1) {
        // do your task
        usleep(interval_ms);
    }
}

int main() {
    unsigned int interval = 500;
    pthread_t background_worker;
    pthread_create(&background_worker, NULL, background, (void*)&interval);
    // main work
    pthread_join(background_worker, NULL);
}

Handle threads between classes with C++, libvlc and ubuntu

I have a C++ application with a GUI that needs to play some mp3s depending on the user's interactions. I need to play the mp3 without blocking the program's flow.
To save code, I decided to write a class that handles the mp3 playback and plays it in a new thread, but I'm having problems when I need to stop the playback.
I know that libvlc already has some locking functions, but the flow of the program stops while the mp3 is playing.
The mp3 starts correctly, but if I try to call the stop_mp3() function, I get a core dumped error.
The error is generated when I call the stop function from the secondpanel.cpp.
// replay.h
#include <vlc/vlc.h>

class rePlay
{
public:
    rePlay();
    virtual ~rePlay();
    void play_mp3(const char*);
    void stop_mp3();
protected:
    libvlc_instance_t *inst;
    libvlc_media_player_t *mp;
    libvlc_media_t *m;
private:
};
// rePlay.cpp
#include "rePlay.h"
#include <vlc/vlc.h>
#include <mutex>

std::mutex mp3_mutex;

rePlay::rePlay()
{
    //ctor
}

rePlay::~rePlay()
{
    //dtor
}

void rePlay::play_mp3(const char* path){
    mp3_mutex.lock();
    // load the vlc engine
    inst = libvlc_new(0, NULL);
    printf("opening the file %d\n", inst);
    // create a new item
    m = libvlc_media_new_path(inst, path);
    // create a media player playing environment
    mp = libvlc_media_player_new_from_media(m);
    // no need to keep the media now
    libvlc_media_release(m);
    // play the media_player
    libvlc_media_player_play(mp);
    printf("Done.\n");
}

void rePlay::stop_mp3(){
    mp3_mutex.unlock();
    // stop playing
    libvlc_media_player_stop(mp);
    // free the media_player
    libvlc_media_player_release(mp);
    libvlc_release(inst);
}
// firstpanel.h
class firstpanel: public wxPanel
{
public:
    firstpanel(wxWindow* parent, Isola02Frame*, wxWindowID id=wxID_ANY, const wxPoint& pos=wxDefaultPosition, const wxSize& size=wxDefaultSize);
    virtual ~firstpanel();
    void checkValue(wxCommandEvent& event);
    void check_cf(wxTimerEvent& event);
    rePlay *mp3_apertura_porta = new rePlay(); // <-- I DECLARED THE pointer here
    //(*Declarations(firstpanel)
    wxStaticText* assistenza;
    wxStaticText* first_panel;
    wxStaticText* identificazione;
    wxTextCtrl* tessera;
    //*)
    ...
};
// firstpanel.cpp
std::thread second = std::thread([this]() noexcept {
    this->mp3_apertura_porta->play_mp3("/home/robodyne/Project/audio/scegli-rifiuto.mp3"); });
second.join();

// secondpanel.cpp
void secondpanel::OnBitmapButton2Click(wxCommandEvent& event)
{
    firstpanel *ptr; // note: never initialized, so the call below dereferences an invalid pointer
    ptr->mp3_apertura_porta->stop_mp3();
}
EDIT1: Thanks to @Ted Lyngmo, I used the libvlcpp library, which seems to be async somehow, and it works fine. The only problem is that I don't know how to call mp.stopAsync() from stop_mp3() to stop the audio file, because the variable mp is not global.
#include "rePlay.h"
#include <vlc/vlc.h>
#include <mutex>
#include <unistd.h>
#include "vlcpp/vlc.hpp"
std::mutex mp3_mutex;
rePlay::rePlay()
{
//ctor
}
rePlay::~rePlay()
{
//dtor
}
void rePlay::play_mp3(const char* path){
auto instance = VLC::Instance(0, nullptr);
auto media = VLC::Media(instance, path, VLC::Media::FromPath);
auto mp = VLC::MediaPlayer(media);
auto mp.play();
#if LIBVLC_VERSION_INT >= LIBVLC_VERSION(4, 0, 0, 0)
#else
mp.stop();
#endif
}
void rePlay::stop_mp3(){
mp.stopAsync(); <-- variable mp is not global!
}
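One way out (a sketch of my own under the assumption that libvlcpp is in use, not code from this thread): keep the libvlcpp objects as members of rePlay, so stop_mp3() can reach the same MediaPlayer that play_mp3() started:

// rePlay.h -- hypothetical rework: hold the player as a member
#include <memory>
#include "vlcpp/vlc.hpp"

class rePlay {
public:
    void play_mp3(const char* path) {
        instance = std::make_unique<VLC::Instance>(0, nullptr);
        auto media = VLC::Media(*instance, path, VLC::Media::FromPath);
        mp = std::make_unique<VLC::MediaPlayer>(media);
        mp->play(); // returns immediately; VLC plays on its own threads
    }
    void stop_mp3() {
        if (!mp)
            return;
#if LIBVLC_VERSION_INT >= LIBVLC_VERSION(4, 0, 0, 0)
        mp->stopAsync();
#else
        mp->stop();
#endif
    }
private:
    std::unique_ptr<VLC::Instance> instance;
    std::unique_ptr<VLC::MediaPlayer> mp;
};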
EDIT2:
I think libvlcpp doesn't work well with GUI applications.
If I run it in a console application, I'm able to perform other operations in parallel, but when I execute it in the wxWidgets application, it blocks the flow.
This is the terminal console application:
#include "vlcpp/vlc.hpp"
#include <thread>
#include <iostream>
int main(int ac, char** av)
{
if (ac < 2)
{
std::cerr << "usage: " << av[0] << " <file to play>" << std::endl;
return 1;
}
auto instance = VLC::Instance(0, nullptr);
auto media = VLC::Media(instance, av[1], VLC::Media::FromPath);
auto mp = VLC::MediaPlayer(media);
mp.play();
for (int i=0; i < 10000000; i++){
printf("%d\n", i);
}
std::this_thread::sleep_for( std::chrono::seconds( 10 ) );
#if LIBVLC_VERSION_INT >= LIBVLC_VERSION(4, 0, 0, 0)
mp.stopAsync();
#else
mp.stop();
#endif
}
The for() loop runs in parallel while the mp3 is playing.
The same doesn't happen when I use it in my application.

Google Speech Recognition doesn't work because of colliding threads Qt C++

I'm using Google's Speech-To-Text API in my Qt C++ application.
Google's C++ documentation is helpful, but only to an extent.
In my code below, if I uncomment
std::this_thread::sleep_for(std::chrono::seconds(1));
the speech recognition works, but not properly: it skips some words. Without this line, it doesn't work at all. I think that's because the while loop of MicrophoneThreadMain() collides with the while loop of start_speech_to_text(), but I'm not sure.
I want these two functions to run side-by-side simultaneously, without interruptions, and with no delays.
I tried to use QThreads and Signal and Slots but couldn’t make it work.
speech_to_text.cpp
#include "speechtotext.h"
using google::cloud::speech::v1::StreamingRecognitionConfig;
using google::cloud::speech::v1::RecognitionConfig;
using google::cloud::speech::v1::Speech;
using google::cloud::speech::v1::StreamingRecognizeRequest;
using google::cloud::speech::v1::StreamingRecognizeResponse;
SpeechToText::SpeechToText(QObject *parent) : QObject(parent)
{
}
void SpeechToText::initialize()
{
QAudioFormat qtFormat;
// Get default audio input device
QAudioDeviceInfo qtInfo = QAudioDeviceInfo::defaultInputDevice();
// Set the audio format settings
qtFormat.setCodec("audio/pcm");
qtFormat.setByteOrder(QAudioFormat::Endian::LittleEndian);
qtFormat.setChannelCount(1);
qtFormat.setSampleRate(16000);
qtFormat.setSampleSize(16);
qtFormat.setSampleType(QAudioFormat::SignedInt);
// Check whether the format is supported
if (!qtInfo.isFormatSupported(qtFormat)) {
qWarning() << "Default format is not supported";
exit(3);
}
// Instantiate QAudioInput with the settings
audioInput = new QAudioInput(qtFormat);
// Start receiving data from audio input
ioDevice = audioInput->start();
emit finished_initializing();
}
void SpeechToText::MicrophoneThreadMain(grpc::ClientReaderWriterInterface<StreamingRecognizeRequest,
StreamingRecognizeResponse> *streamer)
{
StreamingRecognizeRequest request;
std::size_t size_read;
while(true)
{
audioDataBuffer.append(ioDevice->readAll());
size_read = audioDataBuffer.size();
// And write the chunk to the stream.
request.set_audio_content(&audioDataBuffer.data()[0], size_read);
std::cout << "Sending " << size_read / 1024 << "k bytes." << std::endl;
streamer->Write(request);
//std::this_thread::sleep_for(std::chrono::seconds(1));
}
}
void SpeechToText::start_speech_to_text()
{
StreamingRecognizeRequest request;
auto *streaming_config = request.mutable_streaming_config();
RecognitionConfig *recognition_config = new RecognitionConfig();
recognition_config->set_language_code("en-US");
recognition_config->set_sample_rate_hertz(16000);
recognition_config->set_encoding(RecognitionConfig::LINEAR16);
streaming_config->set_allocated_config(recognition_config);
// Create a Speech Stub connected to the speech service.
auto creds = grpc::GoogleDefaultCredentials();
auto channel = grpc::CreateChannel("speech.googleapis.com", creds);
std::unique_ptr<Speech::Stub> speech(Speech::NewStub(channel));
// Begin a stream.
grpc::ClientContext context;
auto streamer = speech->StreamingRecognize(&context);
// Write the first request, containing the config only.
streaming_config->set_interim_results(true);
streamer->Write(request);
// The microphone thread writes the audio content.
std::thread microphone_thread(&SpeechToText::MicrophoneThreadMain, this, streamer.get());
// Read responses.
StreamingRecognizeResponse response;
while (streamer->Read(&response)) // Returns false when no more to read.
{
// Dump the transcript of all the results.
for (int r = 0; r < response.results_size(); ++r)
{
auto result = response.results(r);
std::cout << "Result stability: " << result.stability() << std::endl;
for (int a = 0; a < result.alternatives_size(); ++a)
{
auto alternative = result.alternatives(a);
std::cout << alternative.confidence() << "\t"
<< alternative.transcript() << std::endl;
}
}
}
grpc::Status status = streamer->Finish();
microphone_thread.join();
if (!status.ok()) {
// Report the RPC failure.
qDebug() << "error RPC";
std::cerr << status.error_message() << std::endl;
}
}
speech_to_text.h
#ifndef SPEECHTOTEXT_H
#define SPEECHTOTEXT_H

#include <QObject>
#include <QDebug>
#include <QThread>
#include <thread>
#include <chrono>
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>
#include <functional>
#include <QtMultimedia>
#include <QtMultimedia/QAudioInput>
#include <QAudioDeviceInfo>
#include <QAudioFormat>
#include <QIODevice>
#include <QtConcurrent>
#include <QMutex>
#include <grpc++/grpc++.h>
#include "google/cloud/speech/v1/cloud_speech.grpc.pb.h"

using google::cloud::speech::v1::StreamingRecognitionConfig;
using google::cloud::speech::v1::RecognitionConfig;
using google::cloud::speech::v1::Speech;
using google::cloud::speech::v1::StreamingRecognizeRequest;
using google::cloud::speech::v1::StreamingRecognizeResponse;

class SpeechToText : public QObject
{
    Q_OBJECT
public:
    explicit SpeechToText(QObject *parent = nullptr);

signals:
    void finished_initializing();
    void finished_speech_to_text(QString);

public slots:
    void initialize();
    void start_speech_to_text();

private:
    void MicrophoneThreadMain(grpc::ClientReaderWriterInterface<StreamingRecognizeRequest,
                              StreamingRecognizeResponse> *);
    QAudioInput *audioInput;
    QIODevice *ioDevice;
    QByteArray audioDataBuffer;
};

#endif // SPEECHTOTEXT_H
Any idea on how to solve this?
I'm posting the solution to my problem here. Thanks to @allquixotic for all the helpful information.
in mainwindow.cpp
void MainWindow::setUpMicrophoneRecorder()
{
    microphone_thread = new QThread(this);
    microphone_recorder_engine.moveToThread(microphone_thread);
    connect(microphone_thread, SIGNAL(started()), &microphone_recorder_engine, SLOT(start_listen()));
    connect(&microphone_recorder_engine, &MicrophoneRecorder::microphone_data_raw,
            this, [this] (const QByteArray &data) {
        this->speech_to_text_engine.listen(data);
    });
    microphone_thread->start();
}

void MainWindow::setUpSpeechToTextEngine()
{
    speech_to_text_thread = new QThread(this);
    speech_to_text_engine.moveToThread(speech_to_text_thread);
    connect(speech_to_text_thread, SIGNAL(started()), &speech_to_text_engine, SLOT(initialize()));
    connect(&speech_to_text_engine, SIGNAL(finished_speech_to_text(QString)), this, SLOT(process_user_input(QString)));
    speech_to_text_thread->start();
}
microphonerecorder.h
#ifndef MICROPHONERECORDER_H
#define MICROPHONERECORDER_H

#include <QObject>
#include <QByteArray>
#include <QDebug>
#include <QtMultimedia>
#include <QtMultimedia/QAudioInput>
#include <QAudioDeviceInfo>
#include <QAudioFormat>
#include <QIODevice>

class MicrophoneRecorder : public QObject
{
    Q_OBJECT
public:
    explicit MicrophoneRecorder(QObject *parent = nullptr);

signals:
    void microphone_data_raw(const QByteArray &);

public slots:
    void start_listen();

private slots:
    void listen(const QByteArray &);

private:
    QAudioInput *audioInput;
    QIODevice *ioDevice;
    QByteArray audioDataBuffer;
};

#endif // MICROPHONERECORDER_H
microphonerecorder.cpp
#include "microphonerecorder.h"
MicrophoneRecorder::MicrophoneRecorder(QObject *parent) : QObject(parent)
{
}
void MicrophoneRecorder::listen(const QByteArray &audioData)
{
emit microphone_data_raw(audioData);
}
void MicrophoneRecorder::start_listen()
{
QAudioFormat qtFormat;
// Get default audio input device
QAudioDeviceInfo qtInfo = QAudioDeviceInfo::defaultInputDevice();
// Set the audio format settings
qtFormat.setCodec("audio/pcm");
qtFormat.setByteOrder(QAudioFormat::Endian::LittleEndian);
qtFormat.setChannelCount(1);
qtFormat.setSampleRate(16000);
qtFormat.setSampleSize(16);
qtFormat.setSampleType(QAudioFormat::SignedInt);
// Check whether the format is supported
if (!qtInfo.isFormatSupported(qtFormat)) {
qWarning() << "Default format is not supported";
exit(3);
}
// Instantiate QAudioInput with the settings
audioInput = new QAudioInput(qtFormat);
// Start receiving data from audio input
ioDevice = audioInput->start();
// Listen to the received data for wake words
QObject::connect(ioDevice, &QIODevice::readyRead, [=] {
listen(ioDevice->readAll());
});
}
speechtotext.h
#ifndef SPEECHTOTEXT_H
#define SPEECHTOTEXT_H

#include <QObject>
#include <QDebug>
#include <QThread>
#include <QDateTime>
#include <thread>
#include <chrono>
#include <string>
#include <QtMultimedia>
#include <QtMultimedia/QAudioInput>
#include <QAudioDeviceInfo>
#include <QAudioFormat>
#include <QIODevice>
#include <QtConcurrent>
#include <QMutex>
#include <grpc++/grpc++.h>
#include "google/cloud/speech/v1/cloud_speech.grpc.pb.h"

using google::cloud::speech::v1::StreamingRecognitionConfig;
using google::cloud::speech::v1::RecognitionConfig;
using google::cloud::speech::v1::Speech;
using google::cloud::speech::v1::StreamingRecognizeRequest;
using google::cloud::speech::v1::StreamingRecognizeResponse;

class SpeechToText : public QObject
{
    Q_OBJECT
public:
    explicit SpeechToText(QObject *parent = nullptr);

signals:
    void finished_initializing();
    void in_speech_to_text();
    void out_of_speech_to_text();
    void finished_speech_to_text(QString);

public slots:
    void initialize();
    void listen(const QByteArray &);
    void start_speech_to_text();

private:
    void MicrophoneThreadMain(grpc::ClientReaderWriterInterface<StreamingRecognizeRequest,
                              StreamingRecognizeResponse> *);
    void StreamerThread(grpc::ClientReaderWriterInterface<StreamingRecognizeRequest,
                        StreamingRecognizeResponse> *);
    QByteArray audioDataBuffer;
    int m_start_time;
};

#endif // SPEECHTOTEXT_H
speechtotext.cpp
#include "speechtotext.h"
using google::cloud::speech::v1::StreamingRecognitionConfig;
using google::cloud::speech::v1::RecognitionConfig;
using google::cloud::speech::v1::Speech;
using google::cloud::speech::v1::StreamingRecognizeRequest;
using google::cloud::speech::v1::StreamingRecognizeResponse;
SpeechToText::SpeechToText(QObject *parent) : QObject(parent)
{
}
void SpeechToText::initialize()
{
emit finished_initializing();
}
void SpeechToText::MicrophoneThreadMain(grpc::ClientReaderWriterInterface<StreamingRecognizeRequest,
StreamingRecognizeResponse> *streamer)
{
StreamingRecognizeRequest request;
std::size_t size_read;
while (time(0) - m_start_time <= TIME_RECOGNITION)
{
int chunk_size = 64 * 1024;
if (audioDataBuffer.size() >= chunk_size)
{
QByteArray bytes_read = QByteArray(audioDataBuffer);
size_read = std::size_t(bytes_read.size());
// And write the chunk to the stream.
request.set_audio_content(&bytes_read.data()[0], size_read);
bool ok = streamer->Write(request);
/*if (ok)
{
std::cout << "Sending " << size_read / 1024 << "k bytes." << std::endl;
}*/
audioDataBuffer.clear();
audioDataBuffer.resize(0);
}
std::this_thread::sleep_for(std::chrono::milliseconds(50));
}
qDebug() << "Out of speech recognition: " << end_date;
emit out_of_speech_to_text();
streamer->WritesDone();
}
void SpeechToText::StreamerThread(grpc::ClientReaderWriterInterface<StreamingRecognizeRequest,
StreamingRecognizeResponse> *streamer)
{
// Read responses.
StreamingRecognizeResponse response;
while (time(0) - m_start_time <= TIME_RECOGNITION)
{
if(streamer->Read(&response)) // Returns false when no more to read.
{
// Dump the transcript of all the results.
if (response.results_size() > 0)
{
auto result = response.results(0);
if (result.alternatives_size() > 0)
{
auto alternative = result.alternatives(0);
auto transcript = QString::fromStdString(alternative.transcript());
if (result.is_final())
{
qDebug() << "Speech recognition: " << transcript;
emit finished_speech_to_text(transcript);
}
}
}
}
}
}
void SpeechToText::listen(const QByteArray &audioData)
{
audioDataBuffer.append(audioData);
}
void SpeechToText::start_speech_to_text()
{
qDebug() << "in start_speech_to_text: " << start_date;
emit in_speech_to_text();
m_start_time = time(0);
audioDataBuffer.clear();
audioDataBuffer.resize(0);
StreamingRecognizeRequest request;
auto *streaming_config = request.mutable_streaming_config();
RecognitionConfig *recognition_config = new RecognitionConfig();
recognition_config->set_language_code("en-US");
recognition_config->set_sample_rate_hertz(16000);
recognition_config->set_encoding(RecognitionConfig::LINEAR16);
streaming_config->set_allocated_config(recognition_config);
// Create a Speech Stub connected to the speech service.
auto creds = grpc::GoogleDefaultCredentials();
auto channel = grpc::CreateChannel("speech.googleapis.com", creds);
std::unique_ptr<Speech::Stub> speech(Speech::NewStub(channel));
// Begin a stream.
grpc::ClientContext context;
auto streamer = speech->StreamingRecognize(&context);
// Write the first request, containing the config only.
streaming_config->set_interim_results(true);
streamer->Write(request);
// The microphone thread writes the audio content.
std::thread microphone_thread(&SpeechToText::MicrophoneThreadMain, this, streamer.get());
std::thread streamer_thread(&SpeechToText::StreamerThread, this, streamer.get());
microphone_thread.join();
streamer_thread.join();
}
You should really follow Google's example and only do 64k at a time.
You should use WritesDone() on the streamer when you intend the request to be shipped to Google's server.
It appears that you aren't ever clearing out your QByteArray's data, so it will just pile up over time with each successive append call. Since you're using a pointer to the first element of data in the underlying array, each time you run through your loop you're sending the entire audio data that's been captured up to that point to streamer.

I suggest a nested loop that calls QIODevice::read(char *data, qint64 maxSize) repeatedly until your QByteArray has exactly 64KB. You'll need to handle a return value of -1 indicating end of stream, and adjust maxSize downwards based on how much more data is needed to fill your array up to 64k. Requests to Google's API with too little data (e.g. just a couple of bytes, as your current loop appears to do at first) may get you rate-limited, or create upstream congestion on the Internet connection due to the high protocol-overhead-to-data ratio.

Also, it's probably easier to handle this with a plain C-style array of a fixed size (64k) rather than a QByteArray, because you don't need resizing, and AFAIK QByteArray::clear() could cause memory allocation (not great for performance). To avoid re-sending old data on a short write (e.g. when the microphone stream closes before the 64k buffer is full), you should also memset(array, 0, sizeof array); after each ClientReaderWriterInterface::WritesDone() call.
If the network can't keep up with the incoming microphone data, you may end up with an overrun on the QAudioInput, where it runs out of local buffer to store the audio. More buffering makes this less likely but also decreases responsiveness. You may want to just buffer all the data that comes off the QAudioInput into an unbounded QByteArray and read out of that 64k at a time (you can do so by wrapping it in a QBuffer, and all your code dealing with QIODevice in MicrophoneThreadMain() will be compatible).

I think that, normally, for projects like yours, the user would prefer worse responsiveness over having to repeat themselves after a network-related overrun. But there's probably a threshold, maybe 5 seconds or so, after which the buffered data becomes "out of date": the user may try speaking into the mic again, causing a weird effect of multiple STT events firing in rapid succession once the upstream bottleneck frees up.
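To make the nested read loop concrete, here is a rough sketch (the helper name fillChunk and the constant kChunkSize are illustrative, not from the Google sample or the code above):

#include <QIODevice>
#include <QThread>

constexpr qint64 kChunkSize = 64 * 1024;

// Fill `chunk` with exactly kChunkSize bytes from the device, unless the
// stream ends first; returns how many bytes were actually read.
qint64 fillChunk(QIODevice *ioDevice, char *chunk)
{
    qint64 filled = 0;
    while (filled < kChunkSize) {
        qint64 n = ioDevice->read(chunk + filled, kChunkSize - filled);
        if (n < 0)
            break;               // -1 signals end of stream
        if (n == 0) {
            QThread::msleep(10); // nothing buffered yet; wait for the mic
            continue;
        }
        filled += n;
    }
    return filled; // may be short if the stream ended early
}

Each streamer->Write() would then carry exactly one such chunk, and a memset(chunk, 0, kChunkSize) after WritesDone() guards against re-sending stale bytes.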

Using boost library with Intel Pin

I am trying to use the Boost 1.60.0 library with Intel Pin 2.14-71313-msvc12-windows. The following piece of code is the simple implementation I did to try things out:
#define _CRT_SECURE_NO_WARNINGS
#include "pin.H"
#include <iostream>
#include <fstream>
#include <stdio.h>
#include <stdlib.h>
#include <sstream>
#include <time.h>
#include <boost/lockfree/spsc_queue.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

namespace boost_network{
    #include <boost/asio.hpp>
    #include <boost/array.hpp>
}
//Buffersize of lockfree queue to use
const int BUFFERSIZE = 1000;
//Tracefiles for error / debug purpose
std::ofstream TraceFile;

//String wrapper for boost queue
class statement {
public:
    statement(){ s = ""; }
    statement(const std::string &n) : s(n) {}
    std::string s;
};

//string queue to store inserts
boost::lockfree::spsc_queue<statement, boost::lockfree::capacity<BUFFERSIZE>> buffer; // need lockfree queue for multithreading

//Pin Lock to synchronize buffer pushes between threads
PIN_LOCK lock;

KNOB<string> KnobOutputFile(KNOB_MODE_WRITEONCE, "pintool", "o", "calltrace.txt", "specify trace file name");
KNOB<BOOL> KnobPrintArgs(KNOB_MODE_WRITEONCE, "pintool", "a", "0", "print call arguments ");

INT32 Usage()
{
    cerr << "This tool produces a call trace." << endl << endl;
    cerr << KNOB_BASE::StringKnobSummary() << endl;
    return -1;
}

VOID ImageLoad(IMG img, VOID *)
{
    //save module informations
    buffer.push(statement("" + IMG_Name(img) + "'; '" + IMG_Name(img).c_str() + "'; " + IMG_LowAddress(img) + ";"));
}

VOID Fini(INT32 code, VOID *v)
{
}

void do_somenetwork(std::string host, int port, std::string message)
{
    boost_network::boost::asio::io_service ios;
    boost_network::boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::address::from_string(host), port);
    boost_network::boost::asio::ip::tcp::socket socket(ios);
    socket.connect(endpoint);
    boost_network::boost::system::error_code error;
    socket.write_some(boost_network::boost::asio::buffer(message.data(), message.size()), error);
    socket.close();
}

void WriteData(void * arg)
{
    int popped; //actual amount of popped objects
    const int pop_amount = 10000;
    statement curr[pop_amount];
    string statement = "";
    while (1) {
        //pop more objects from buffer
        while (popped = buffer.pop(curr, pop_amount))
        {
            //got new statements in buffer to insert into db: clean up statement
            statement.clear();
            //concat into one statement
            for (int i = 0; i < popped; i++){
                statement += curr[i].s;
            }
            do_somenetwork(std::string("127.0.0.1"), 50000, statement);
        }
        PIN_Sleep(1);
    }
}
int main(int argc, char *argv[])
{
    PIN_InitSymbols();
    //write address of label to TraceFile
    TraceFile.open(KnobOutputFile.Value().c_str());
    TraceFile << &label << endl;
    TraceFile.close();
    // Initialize the lock
    PIN_InitLock(&lock);
    // Initialize pin
    if (PIN_Init(argc, argv)) return Usage();
    // Register ImageLoad to be called when an image is loaded
    IMG_AddInstrumentFunction(ImageLoad, 0);
    //Start writer thread
    PIN_SpawnInternalThread(WriteData, 0, 0, 0);
    PIN_AddFiniFunction(Fini, 0);
    // Never returns
    PIN_StartProgram();
    return 0;
}
When I build the above code, Visual Studio cannot find boost_network::boost::asio::ip and keeps giving an error saying asio::ip does not exist. I had previously posted this question myself:
Sending data from a boost asio client
and after using the provided solution in the same workspace, the code worked fine and I was able to communicate over the network. I am not sure what is going wrong here. For some reason, using the separate namespace does not seem to work out, because it says boost must be in the default namespace.
However, if I do not add the namespace, then the line
KNOB<BOOL> KnobPrintArgs(KNOB_MODE_WRITEONCE, "pintool", "a", "0", "print call arguments ");
throws an error saying BOOL is ambiguous.
Kindly suggest what should be a viable solution in this situation. I am using Visual Studio 2013.
The same piece of code also works with Pin alone, without the network part, and I can write the data generated by Pin into a flat file.
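For what it's worth, a sketch of an alternative (an educated guess, not verified against Pin 2.14): include the boost headers normally, outside any namespace (wrapping a library's headers in a namespace breaks its own declarations), and resolve the BOOL ambiguity by qualifying Pin's type explicitly, since Pin places its base types in the LEVEL_BASE namespace:

// No namespace wrapper around the includes.
#include <boost/asio.hpp>
#include <boost/array.hpp>
#include "pin.H"

// Qualify the ambiguous type instead of relying on the unqualified name.
KNOB<LEVEL_BASE::BOOL> KnobPrintArgs(KNOB_MODE_WRITEONCE, "pintool", "a", "0",
                                     "print call arguments ");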

OpenCV camera capture from within a thread

This is a small part of the code I'm trying to get to work. This is also one of my first times working with C++. I'm used to higher-level languages, like Java or C#.
The main version is meant to be run as a shared object or DLL. The idea is that an external program (in C#) will start the main loops. The frames from the camera will be captured in a thread. Information will be processed inside that thread and copied to an array ("dataArray"). This copying will be done while a class mutex is locked. Then another function, called externally, will copy that saved array ("dataArray") to a second array ("outArray") and return a pointer to the second array. The external program will use the pointer to copy the data from the second array, which will not be modified until the function is called again.
But for all that to work, I need the frames to constantly be captured. I realized that I needed something to keep my main function going, so I'm keeping an infinite loop in there. In the "real" version, the keepRunning variable will be changed by the external program running the library.
I was recently lectured on StackOverflow about not making global variables, so I'm keeping the one instance of my class in a static member. That's pretty standard in Java; I don't know if it's bad practice in C++. I was also taken by surprise by how C++ threads start as soon as they're created, without an explicit "start" instruction. That's why I'm putting my only thread in a vector, which seems to be what most people recommend.
I understand that with keepRunning never actually being changed, the threads will never be joined, but I'll deal with that later. I'm running this on a Mac, but I'll eventually need it to run on Windows, Mac and Linux.
Here's my header:
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <thread>
#include <vector>

using namespace cv;
using namespace std;

class MyCap {
public:
    MyCap();
    VideoCapture cap;
    static MyCap * instance;
    void run();
    static void RunThreads(MyCap * cap);
    bool keepRunning = true; // Will be changed by the external program.
    vector<thread> capThreads;
private:
    Mat frame;
};
And here's my code:
#include "theheader.h"
MyCap * MyCap::instance = NULL;
int main(int argc, char** argv) {
MyCap::instance = new MyCap();
MyCap::instance->capThreads.push_back(thread(MyCap::RunThreads, MyCap::instance));
// Outside loop.
while(MyCap::instance->keepRunning) {
}
for (int i = 0; i < MyCap::instance->capThreads.size(); i++) {
MyCap::instance->capThreads[i].join();
}
}
MyCap::MyCap() {
namedWindow("flow", 1);
cap.open(0);
}
void MyCap::RunThreads(MyCap * cap) {
cap->run();
}
void MyCap::run() {
// Inside loop.
while(keepRunning) {
cap >> frame;
imshow("flow", frame);
if (waitKey(30) >= 0) {
break;
}
}
}
With this code, I get a black screen. If I run cap.open(0) from within the run method, I don't even get that. I'm obviously doing something very wrong, but what really puzzles me is: why does it make a difference where that same code is called from? If I run what is now in run inside main, it works. If I move the call to cap.open(0) from the constructor to run, that changes what the method does. The waitKey condition also stops working from within the thread. What big thing am I missing?
Version 2
Based on the suggestions of @darien-pardibas, I made this second version:
Header:
#include <stdio.h>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <thread>
#include <vector>

using namespace cv;
using namespace std;

class MyCap {
public:
    MyCap();
    void run();
    bool keepRunning = true; // Will be changed by the external program.
    static void RunThreads(MyCap * cap);
    static vector<thread> capThreads;
    static MyCap * getInstance();
private:
    static MyCap * instance;
};
The main file:
#include "theprogram.h" // I'll admit that, even for a placeholder, it was a bad name.
MyCap * MyCap::instance = NULL;
vector<thread> MyCap::capThreads;
MyCap::MyCap() {
cout << "Instantiate" << endl;
}
MyCap * MyCap::getInstance() {
if (MyCap::instance == NULL) {
MyCap::instance = new MyCap;
}
return MyCap::instance;
}
void MyCap::RunThreads(MyCap * cap) {
cap->run();
}
void MyCap::run() {
cout << "Run" << endl;
namedWindow("flow", 1);
cout << "Window created." << endl;
VideoCapture cap(0); // HANGS HERE!
cout << "Camera open." << endl; // This never gets printed.
// Inside loop.
Mat frame;
while(keepRunning) {
cap >> frame;
imshow("flow", frame);
if (waitKey(30) >= 0) {
break;
}
}
}
int main(int argc, char** argv) {
MyCap::capThreads.push_back(thread(&MyCap::RunThreads, MyCap::getInstance()));
for (int i = 0; i < MyCap::capThreads.size(); i++) {
MyCap::capThreads[i].join();
}
}
This prints:
Instantiate
Run
Window created.
And hangs there.
But if I move the code from run to main and change keepRunning to true, then it works as expected. I think I'm missing something else, and I'm guessing it has something to do with how C++ works.
Okay, without trying to resolve all the design-pattern issues I can see in your code, I can confirm that the code below works. I think the main problem was that you needed to create the namedWindow in the same thread where you will be capturing the image, and remove the while loop you had in your main method.
// "theheader.h"
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <thread>
#include <vector>
class MyCap {
public:
void run();
static void RunThreads(MyCap * cap);
bool keepRunning = true; // Will be changed by the external program.
std::vector<std::thread> capThreads;
private:
cv::Mat frame;
cv::VideoCapture cap;
MyCap() { }
static MyCap * s_instance;
public:
static MyCap *instance();
};
// "theheader.cpp"
#include "theheader.h"
#pragma comment(lib, "opencv_core248d")
#pragma comment(lib, "opencv_highgui248d")
using namespace std;
using namespace cv;
MyCap * MyCap::s_instance = NULL;
MyCap* MyCap::instance() {
if (s_instance == NULL)
s_instance = new MyCap();
return s_instance;
}
void MyCap::RunThreads(MyCap * cap) {
cap->run();
}
void MyCap::run() {
namedWindow("flow", 1);
cap.open(0);
// Inside loop.
while (keepRunning) {
cap >> frame;
imshow("flow", frame);
if (waitKey(30) >= 0) {
break;
}
}
}
int main(int argc, char** argv) {
MyCap::instance()->capThreads.push_back(thread(&MyCap::RunThreads, MyCap::instance()));
for (int i = 0; i < MyCap::instance()->capThreads.size(); i++) {
MyCap::instance()->capThreads[i].join();
}
}