I'm trying to send a video over a ZeroMQ infrastructure, so I split the video into chunks. When I put the chunks into a vector and send them through zmq::send_multipart, RAM usage climbs very high and some time later I get a segmentation fault.
What puzzles me is that when I comment out the line that sends the multipart message and run the program, the vector is built normally and I don't get the segmentation fault, and the RAM consumption is not nearly as heavy.
Can someone give me a tip about how to send this file?
Server code:
#include <fstream>
#include <sstream>
#include <chrono>
#include <thread>
#include <iostream>
#include <future>
#include <zmq.hpp>
#include <zmq_addon.hpp>
using namespace std::chrono_literals;
const int size1MB = 1024 * 1024;
template <typename T>
void debug(T x)
{
std::cout << x << std::endl;
}
//Generate new chunks
std::unique_ptr<std::ofstream> createChunkFile(std::vector<std::string> &vecFilenames)
{
std::stringstream filename;
filename << "chunk" << vecFilenames.size() << ".mp4";
vecFilenames.push_back(filename.str());
return std::make_unique<std::ofstream>(filename.str(), std::ios::trunc);
}
//Split the file into chunks
void split(std::istream &inStream, int nMegaBytesPerChunk, std::vector<std::string> &vecFilenames)
{
std::unique_ptr<char[]> buffer(new char[size1MB]);
int nCurrentMegaBytes = 0;
std::unique_ptr<std::ostream> pOutStream = createChunkFile(vecFilenames);
while (!inStream.eof())
{
inStream.read(buffer.get(), size1MB);
pOutStream->write(buffer.get(), inStream.gcount());
++nCurrentMegaBytes;
if (nCurrentMegaBytes >= nMegaBytesPerChunk)
{
pOutStream = createChunkFile(vecFilenames);
nCurrentMegaBytes = 0;
}
}
}
int main()
{
zmq::context_t context(1);
zmq::socket_t socket(context, zmq::socket_type::rep);
socket.bind("tcp://*:5555");
std::ifstream img("video2.mp4", std::ifstream::in | std::ios::binary);
std::ifstream aux;
std::vector<std::string> vecFilenames;
std::vector<zmq::const_buffer> data;
std::ostringstream os;
std::async(std::launch::async, [&img, &vecFilenames]() {
split(img, 100, vecFilenames);
});
img.close();
zmq::message_t message, aux2;
socket.recv(message, zmq::recv_flags::none);
//Put the chunks into the vector
std::async([&data, &aux, &os, &vecFilenames]() {
for (int i = 0; i < vecFilenames.size(); i++)
{
std::async([&aux, &i]() {
aux.open("chunk" + std::to_string(i) + ".mp4", std::ifstream::in | std::ios::binary);
});
os << aux.rdbuf();
data.push_back(zmq::buffer(os.str()));
os.clear();
aux.close();
}
});
//Send the vector for the client
std::async([&socket, &data] {
zmq::send_multipart(socket, data);
});
}
Client-side:
#include <fstream>
#include <sstream>
#include <iostream>
#include <thread>
#include <chrono>
#include <string>
#include <zmq.hpp>
#include <zmq_addon.hpp>
#include <queue>
#include <deque>
#include <future>
#include <vector>
using namespace std::chrono_literals;
template <typename T>
void debug(T x)
{
std::cout << x << std::endl;
}
int main()
{
zmq::context_t context(1);
zmq::socket_t socket(context, zmq::socket_type::req);
socket.connect("tcp://localhost:5555");
std::ofstream img("teste.mp4", std::ios::out | std::ios::binary);
socket.send(zmq::buffer("ok\n"), zmq::send_flags::none);
std::vector<zmq::message_t> send_msgs;
zmq::message_t size;
std::async([&send_msgs, &img, &socket] {
zmq::recv_multipart(socket, std::back_inserter(send_msgs));
while (send_msgs.size())
{
img << send_msgs[0].to_string();
send_msgs.erase(send_msgs.begin());
}
});
}
An attempt to move all the data via one multipart message collects everything into one immense BLOB, plus adds duplicate O/S-level transport-class-specific buffers, and the most probable result is a crash.
Send the individual blocks of the video BLOB as individual simple-message payloads and reconstruct the BLOB on the receiver side (best via indexed numbering, with an option to re-request any part that did not arrive).
Using std::async with a REQ/REP pair is also tricky: this archetype must keep its dFSA-interleaved sequence of .recv()-.send()-.recv()-.send()-... ad infinitum, and it falls into an unsalvageable mutual deadlock if it ever fails to do so.
For streaming video (e.g. for CV / scene-detection), there are more tricks to put in. One of them is the ZMQ_CONFLATE socket option, which keeps only the most recent video frame, so no time is lost on "archaic" scene images that are already part of the history, and the receiving side always processes the latest frame.
I am currently trying to implement a file-transfer app under Linux using Boost.Asio. I am completely new to this topic (still learning C++); over the past days I have been trying to figure out how this might work, and I am already losing my mind.
I have made some progress, but I can't transfer a file completely; instead I am just getting a part of the file. Does anyone know why the buffer is not read or written completely?
I kept it really simple, it's just a series of commands; I will implement it in an object-oriented way later on.
Secondly, I was wondering if there is a more efficient way to map the file in memory. Say someone wants to transfer a 2 TB file?
I am using this binary file for testing: blah.bin
To build it successfully you need:
g++ -std=c++17 -Wall -Wextra -g -Iinclude -Llib src/main.cpp -o bin/main -lboost_system -lpthread
server
//server
#include <boost/asio.hpp>
#include <iostream>
#include <fstream>
using namespace boost::asio;
using ip::tcp;
using std::string;
using std::cout;
using std::endl;
int main() {
boost::asio::io_service io_service;
//listen
tcp::acceptor acceptor_(io_service, tcp::endpoint(tcp::v4(), 3333));
//socket
tcp::socket socket_(io_service);
//waiting
acceptor_.accept(socket_);
//read
boost::asio::streambuf buf;
boost::asio::read_until(socket_, buf, "\nend\n");
auto data = boost::asio::buffer_cast<const char*>(buf.data());
std::ofstream file("transferd.bin");
cout << data;
file << data;
file.close();
//response
boost::asio::write(socket_, boost::asio::buffer("data recived"));
return 0;
}
client
//client
#include <boost/asio.hpp>
#include <iostream>
#include <fstream>
#include <vector>
using namespace boost::asio;
using ip::tcp;
using std::string;
using std::cout;
using std::endl;
using std::vector;
const vector<char> fileVec(const std::string & fileName) {
std::ifstream file(fileName, std::ios::in | std::ios::binary);
vector<char> tempVec ((std::istreambuf_iterator<char>(file)), std::istreambuf_iterator<char>());
file.close();
return tempVec;
};
int main() {
boost::asio::io_service io_service;
//socket
tcp::socket socket(io_service);
//connection
socket.connect(tcp::endpoint(boost::asio::ip::address::from_string("127.0.0.1"), 3333));
//write to server
auto vdata = fileVec("example.bin");
vdata.push_back('\n');
vdata.push_back('e');
vdata.push_back('n');
vdata.push_back('d');
vdata.push_back('\n');
boost::system::error_code error;
boost::asio::write(socket, boost::asio::buffer(vdata), error);
//response from server
boost::asio::streambuf receive_buffer;
boost::asio::read(socket, receive_buffer, boost::asio::transfer_all(), error);
const char* response = boost::asio::buffer_cast<const char*>(receive_buffer.data());
cout << response;
return 0;
}
The problem is not in the socket but in how you are writing the file in the server.
std::ofstream file("transferd.bin");
cout << data; // you cannot print binary data like this on the standard output!
file << data;
file.close();
The above snippet is wrong because the << operator is meant for text, not for binary data!
A simple fix would be to replace it with the following snippet:
std::ofstream file("transferd.bin");
file.write(data, buf.size());
The second part of the question is of course harder, and it requires a lot of code changes.
The point is that you cannot transfer all the content at once; you should split the transfer into small chunks.
One solution is to send a small header with some information, such as the total number of bytes to transfer, so the server can read chunk by chunk until the whole transfer is complete.
The message has a header containing the total message size and the number of chunks. Each chunk has a little header indicating the chunk size and, for instance, the chunk index, in case you want to switch to UDP.
Following the server snippet
#include <array>
#include <boost/asio.hpp>
#include <cstddef>
#include <fstream>
#include <iostream>
#include <vector>
using namespace boost::asio;
using ip::tcp;
using std::cout;
using std::endl;
using std::string;
struct MessageHeader {
int64_t totalSize;
int64_t chunkCount;
};
struct ChunkHeader {
int64_t index;
int64_t size;
};
MessageHeader parseHeader(const char* data) {
MessageHeader header;
memcpy(&header, data, sizeof(MessageHeader));
return header;
}
ChunkHeader parseChunkHeader(const char* data) {
ChunkHeader header;
memcpy(&header, data, sizeof(ChunkHeader));
return header;
}
MessageHeader readHeader(tcp::socket& socket) {
std::array<char, sizeof(MessageHeader)> buffer;
boost::asio::read(socket, boost::asio::buffer(buffer));
return parseHeader(buffer.data());
}
ChunkHeader readChunkHeader(tcp::socket& socket) {
std::array<char, sizeof(ChunkHeader)> buffer;
boost::asio::read(socket, boost::asio::buffer(buffer));
return parseChunkHeader(buffer.data());
}
std::vector<char> readChunkMessage(tcp::socket& socket) {
auto chunkHeader = readChunkHeader(socket);
std::vector<char> chunk(chunkHeader.size);
boost::asio::read(socket, boost::asio::buffer(chunk));
return chunk;
}
int main() {
boost::asio::io_service io_service;
// listen
tcp::acceptor acceptor_(io_service, tcp::endpoint(tcp::v4(), 3333));
// socket
tcp::socket socket_(io_service);
// waiting
acceptor_.accept(socket_);
auto messageHeader = readHeader(socket_);
for (auto chunkIndex = 0ll; chunkIndex != messageHeader.chunkCount; ++chunkIndex) {
auto chunk = readChunkMessage(socket_);
// open the file in append mode
std::ofstream file("transferd.bin", std::ofstream::app);
file.write(chunk.data(), chunk.size());
}
// response
boost::asio::write(socket_, boost::asio::buffer("data recived"));
return 0;
}
The above solution has drawbacks because everything is synchronous: if the client quits in the middle of the transfer, the server will be stuck :D
A better solution is to turn this into async code... but that's too much all at once for a beginner!
I'm toying with ZeroMQ and Cereal to pass data structures (mostly std::vector of numeric types) between different processes. I've managed to successfully achieve what I wanted, but I'm getting a Segmentation Fault at the end of execution, and after further inspection with valgrind I've noticed that memory is being leaked/not freed.
server.cpp (receiving side):
#include <zmq.hpp>
#include <string>
#include <unistd.h>
#include <sstream>
#include <cereal/archives/binary.hpp> // serializer
#include <cereal/types/vector.hpp> // to allow vector serialization
///////////////////////
int receiveMSG(zmq::socket_t& _ss, std::string& _dd){
_dd.clear(); // empty string
zmq::message_t msg;
int n = _ss.recv(&msg);
char * tmp;
memcpy(tmp,msg.data(),msg.size());
_dd = std::string (tmp,msg.size());
return n;
};
///////////////////////
template <typename _t>
void vectorDeserializer(std::vector<_t>& _output, std::string& _serializedVector){
std::stringstream ss;
ss << _serializedVector;
cereal::BinaryInputArchive iarchive(ss);
iarchive(_output);
ss.clear();
};
/////////////////////////////////////////////////////
int main () {
zmq::context_t context (1);
zmq::socket_t socket (context, ZMQ_REP);
socket.bind ("tcp://*:4455");
std::string ts = "dummy";
std::vector<float> vec (10,5.5);
receiveMSG(socket,ts);
vectorDeserializer(vec,ts);
for (int i = 0; i < vec.size(); ++i) printf("%f\t",vec[i]);
printf("\n");
return 0;
}
client.cpp (sending side):
#include <zmq.hpp>
#include <string>
#include <vector>
#include <cereal/archives/binary.hpp> // serializer
#include <cereal/types/vector.hpp> // to allow vector serialization
#include <sstream>
///////////////////////
int sendMSG(zmq::socket_t& _ss, std::string& _dd){
zmq::message_t msg (_dd.size());
return _ss.send(msg);
};
///////////////////////
template <typename _t>
void vectorSerializer(std::vector<_t>& _input, std::string& _serializedVector){
std::stringstream ss; // any stream can be used
cereal::BinaryOutputArchive oarchive(ss); // Create an output archive
oarchive(_input);
_serializedVector = ss.str();
};
///////////////////////
int main () {
zmq::context_t context (1);
zmq::socket_t socket (context, ZMQ_REQ);
std::cout << "Connecting to server…" << std::endl;
socket.connect ("tcp://localhost:4455");
std::vector<float> tt (5,1.5);
std::string ssss="dummy";
vectorSerializer(tt,ssss);
sendMSG(socket,ssss);
return 0;
}
The output from valgrind is in this Pastebin link. Apparently the destructor of zmq::socket_t triggers a segfault due to an Invalid read of size 4 when closing the socket. Additionally, valgrind reports a lot of Conditional jump or move depends on uninitialised value(s) when calling memcpy.
What exactly am I missing in my code? Or is the issue in the libraries' inner code?
I'm sorry if this question is too simple for you, but I don't have good programming skills or ROS knowledge. I have a ROS topic to which some numbers are published; they are heart-beat intervals in seconds. I need to subscribe to that topic and do this kind of processing: the idea is to have a little array of ten numbers in which I can continuously store ten heart beats. Then I have a bigger array of 60 numbers that must shift up by ten positions, so that the newest ten values from the small array end up at the bottom and the ten oldest values are thrown away (I did a bit of research, and maybe I should use a vector instead of an array, because in C++ arrays are fixed-size as far as I read). Then I have to print these 60 values to a text file every time (in a loop, so the text file is continuously overwritten).
Moreover, I see that ROS outputs the data from a topic as data: 0.987, with each entry separated from the others by --- in a column. What I really want, because I need it for a script that reads the text file, is a file in which the values are in one column without spaces or other signs or words, like this:
0.404
0.952
0.956
0.940
0.960
I provide below the code for my node, in which, for now, I did only the subscribing part, since I have no idea how to do the later steps. Thank you in advance for your help!
Code:
#include "ros/ros.h"
#include "std_msgs/String.h"
#include "../include/heart_rate_monitor/wfdb.h"
#include <stdio.h>
#include <sstream>
#include <iostream>
#include <fstream>
#include <iomanip>
#include <algorithm>
#include <vector>
int main(int argc, char **argv)
{
ros::init(argc, argv, "writer");
ros::NodeHandle n;
ros::Subscriber sub = n.subscribe("/HeartRateInterval", 1000);
ros::spin();
return 0;
}
NOTE: I didn't include the Float32/64 header because I publish the heart beats as strings. I don't know if this is of interest.
EDIT: Below is the code of the publisher node that publishes the data on the ROS topic.
#include "ros/ros.h"
#include "std_msgs/String.h"
#include "../include/heart_rate_monitor/wfdb.h"
#include <stdio.h>
#include <sstream>
#include <iostream>
#include <fstream>
#include <iomanip>
using namespace std;
int main(int argc, char **argv)
{
ros::init(argc, argv, "heart_rate_monitor");
ros::NodeHandle n;
ros::Publisher pub = n.advertise<std_msgs::String>("/HeartRateInterval", 1000);
ros::Rate loop_rate(1);
while (ros::ok())
{
ifstream inputFile("/home/marco/Scrivania/marks.txt");
string line;
while (getline(inputFile, line)) {
istringstream ss(line);
string heart;
ss >> heart;
std_msgs::String msg;
msg.data = ss.str();
pub.publish(msg);
ros::spinOnce();
loop_rate.sleep();
}
}
return 0;
}
Since what is published is the "variable" msg, I tried to replace the variable string_msg in the answer's code with msg, but nothing changed. Thank you!
I'm not sure I understood exactly what you want, but here is a brief example which might do what you need.
I'm using an std::deque here to get a circular buffer of 60 values. What is missing in your code is a callback function, process_message, which is called for the subscriber every time a new message arrives.
I did not compile this code, so it may not compile right away, but the basics are there.
#include <ros/ros.h>
#include <std_msgs/String.h>
#include "../include/heart_rate_monitor/wfdb.h"
#include <stdio.h>
#include <sstream>
#include <iostream>
#include <fstream>
#include <iomanip>
#include <algorithm>
#include <deque>
static std::deque<std::string> queue_buffer;
static int entries_added_since_last_write = 0;
void write_data_to_file()
{
// open file
std::ofstream data_file("my_data_file.txt");
if (data_file.is_open())
{
for (int i = 0; i < queue_buffer.size(); ++i)
{
data_file << queue_buffer[i] << std::endl;
}
}
else
{
std::cout << "Error - Cannot open file." << std::endl;
exit(1);
}
data_file.close();
}
void process_message(const std_msgs::String::ConstPtr& string_msg)
{
// if buffer has already 60 entries, throw away the oldest one
if (queue_buffer.size() == 60)
{
queue_buffer.pop_front();
}
// add the new data at the end
queue_buffer.push_back(string_msg->data);
// check if 10 elements have been added and write to file if so
entries_added_since_last_write++;
if (entries_added_since_last_write == 10
&& queue_buffer.size() == 60)
{
// write data to file and reset counter
write_data_to_file();
entries_added_since_last_write = 0;
}
}
int main(int argc, char **argv)
{
ros::init(argc, argv, "writer");
ros::NodeHandle n;
ros::Subscriber sub = n.subscribe("/HeartRateInterval", 1000, process_message);
ros::spin();
return 0;
}
I am trying to use the Boost 1.60.0 library with Intel Pin 2.14-71313-msvc12-windows. The following piece of code is the simple implementation I did to try things out:
#define _CRT_SECURE_NO_WARNINGS
#include "pin.H"
#include <iostream>
#include <fstream>
#include <stdio.h>
#include <stdlib.h>
#include <sstream>
#include <time.h>
#include <boost/lockfree/spsc_queue.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
namespace boost_network{
#include <boost/asio.hpp>
#include <boost/array.hpp>
}
//Buffersize of lockfree queue to use
const int BUFFERSIZE = 1000;
//Tracefiles for error / debug purpose
std::ofstream TraceFile;
//String wrapper for boost queue
class statement {
public:
statement(){ s = ""; }
statement(const std::string &n) : s(n) {}
std::string s;
};
//string queue to store inserts
boost::lockfree::spsc_queue<statement, boost::lockfree::capacity<BUFFERSIZE>> buffer; // need lockfree queue for multithreading
//Pin Lock to synchronize buffer pushes between threads
PIN_LOCK lock;
KNOB<string> KnobOutputFile(KNOB_MODE_WRITEONCE, "pintool", "o", "calltrace.txt", "specify trace file name");
KNOB<BOOL> KnobPrintArgs(KNOB_MODE_WRITEONCE, "pintool", "a", "0", "print call arguments ");
INT32 Usage()
{
cerr << "This tool produces a call trace." << endl << endl;
cerr << KNOB_BASE::StringKnobSummary() << endl;
return -1;
}
VOID ImageLoad(IMG img, VOID *)
{
//save module informations
buffer.push(statement("" + IMG_Name(img) + "'; '" + IMG_Name(img).c_str() + "'; " + IMG_LowAddress(img) + ";"));
}
VOID Fini(INT32 code, VOID *v)
{
}
void do_somenetwork(std::string host, int port, std::string message)
{
boost_network::boost::asio::io_service ios;
boost_network::boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::address::from_string(host), port);
boost_network::boost::asio::ip::tcp::socket socket(ios);
socket.connect(endpoint);
boost_network::boost::system::error_code error;
socket.write_some(boost_network::boost::asio::buffer(message.data(), message.size()), error);
socket.close();
}
void WriteData(void * arg)
{
int popped; //actual amount of popped objects
const int pop_amount = 10000;
statement curr[pop_amount];
string statement = "";
while (1) {
//pop more objects from buffer
while (popped = buffer.pop(curr, pop_amount))
{
//got new statements in buffer to insert into db: clean up statement
statement.clear();
//concat into one statement
for (int i = 0; i < popped; i++){
statement += curr[i].s;
}
do_somenetwork(std::string("127.0.0.1"), 50000, statement);
}
PIN_Sleep(1);
}
}
int main(int argc, char *argv[])
{
PIN_InitSymbols();
//write address of label to TraceFile
TraceFile.open(KnobOutputFile.Value().c_str());
TraceFile << &label << endl;
TraceFile.close();
// Initialize the lock
PIN_InitLock(&lock);
// Initialize pin
if (PIN_Init(argc, argv)) return Usage();
// Register ImageLoad to be called when an image is loaded
IMG_AddInstrumentFunction(ImageLoad, 0);
//Start writer thread
PIN_SpawnInternalThread(WriteData, 0, 0, 0);
PIN_AddFiniFunction(Fini, 0);
// Never returns
PIN_StartProgram();
return 0;
}
When I build the above code, Visual Studio cannot find boost_network::boost::asio::ip and keeps reporting that asio::ip does not exist. I had previously posted this question myself:
Sending data from a boost asio client
and after using the provided solution in the same workspace, the code worked fine and I was able to communicate over the network. I am not sure what is going wrong here. For some reason using the different namespace does not seem to work out, because it says boost must be in the default namespace.
However, if I do not add the namespace, then the line
KNOB<BOOL> KnobPrintArgs(KNOB_MODE_WRITEONCE, "pintool", "a", "0", "print call arguments ");
throws an error saying BOOL is ambiguous.
Kindly suggest a viable solution for this situation. I am using Visual Studio 2013.
The same piece of code also works with only Pin, without the network part, and I can write the data generated by Pin into a flat file.
Hi, I am new to Poco. Can you please help me find a way to get the index/position while writing into a deflating stream, so that I can truncate the invalid data and make sure my file contains only valid data?
#include <stdexcept>
#include <stdarg.h>
#include <map>
#include <iostream>
#include <cstring>
#include <fstream>
#include <Poco/DeflatingStream.h>
#include <stdio.h>
#include <limits>
#include <stdio.h>
#include <unistd.h>
using namespace std;
std::ofstream* ostr;
Poco::DeflatingOutputStream* ofstr;
string fileName="/home/lamb/Cpp/simple.gzip";
bool written = false;
// int lastsucessfulwrite;
void compress(const string& data){
*ofstr << "\t<xyz>\n";
*ofstr << "\t</xyz>\n";
*ofstr << " who=\"";
*ofstr << "/>\n";
written = true;
/* "lastsucessfulwrite" How to store the index of ofstr , in case of normal files we use ftell but in DeflatingOutputStream how to get index so that I can erase it later based on this value */
}
void timer(){
sleep(2);
// 2 second
written = false ;
}
void close(){
ofstr->close();
delete ofstr;
ofstr = NULL;
ostr->close();
delete ostr;
ostr = NULL;
}
int main(){
ostr = new std::ofstream;
ostr->exceptions(std::ofstream::failbit|std::ofstream::badbit);
ostr->open(fileName.c_str(), std::ios::binary | std::ios::app);
ofstr = new Poco::DeflatingOutputStream(*ostr,
Poco::DeflatingStreamBuf::STREAM_GZIP);
ofstr->precision(std::numeric_limits<double>::digits10);
string data1 = "hello";
string data2 = "hello";
string data3 = "hello";
written = false ;
timer(); // start
compress(data1);
if(written)
{
compress(data2);
}
if(written)
{
compress(data2);
}
if(written)
{
compress(data3); // time is up, timer() is invoked, and only part of compress() is executed
}
// Now I would like to use lastsucessfulwrite as the key and truncate the partially written data3
// In case of normal file we use "truncate" system call
close();
}
You can use any standard C++ stream functions with Poco streams.
streampos pos = ofstr->tellp();