Reading stdin in C++ without using getline - c++

I'm trying to convert a program to C++ (it's a bridge between VS Code and a debugger).
The original program is written in C#, based on vscode-mono-debug
(https://github.com/Microsoft/vscode-mono-debug/blob/master/src/Protocol.cs).
In C# I can read the standard input as a stream:
byte[] buffer = new byte[BUFFER_SIZE];
Stream inputStream = Console.OpenStandardInput();

_rawData = new ByteBuffer();

while (!_stopRequested) {
    var read = await inputStream.ReadAsync(buffer, 0, buffer.Length);

    if (read == 0) {
        // end of stream
        break;
    }

    if (read > 0) {
        _rawData.Append(buffer, read);
        ProcessData();
    }
}
I tried this:
#define _WIN32_WINNT 0x05017
#define BUFFER_SIZE 4096

#include <iostream>
#include <thread>
#include <sstream>

using namespace std;

class ProtocolServer
{
private:
    bool _stopRequested;
    ostringstream _rawData;

public:
    void Start()
    {
        char buffer[BUFFER_SIZE];
        while (!cin.eof())
        {
            cin.getline(buffer, BUFFER_SIZE);
            if (cin.fail())
            {
                // error
                break;
            }
            else
            {
                _rawData << buffer;
            }
        }
    }
};

int main()
{
    ProtocolServer *server = new ProtocolServer();
    server->Start();
    return 0;
}
Input:
Content-Length: 261\r\n\r\n{\"command\":\"initialize\",\"arguments\":{\"clientID\":\"vscode\",\"adapterID\":\"advpl\",\"pathFormat\":\"path\",\"linesStartAt1\":true,\"columnsStartAt1\":true,\"supportsVariableType\":true,\"supportsVariablePaging\":true,\"supportsRunInTerminalRequest\":true},\"type\":\"request\",\"seq\":1}
This reads the first two lines correctly. Since the protocol does not put a \n at the end, it gets stuck in cin.getline on the third iteration.
Switching to read() leaves it blocked in cin.read(), not reading anything at all.
I found some similar questions:
StackOverFlow Question
And examples:
Posix_chat_client
It does not necessarily need to be asynchronous, but it must work on Windows and Linux.
I'm sorry for my English.
Thanks!

What you want is known as unformatted input operations.
Here's a 1:1 translation using just std::iostream. The only "trick" is using and honouring gcount():
std::vector<char> buffer(BUFFER_SIZE);
auto& inputStream = std::cin;

_rawData = std::string {}; // or _rawData.clear(), e.g.

while (!_stopRequested) {
    inputStream.read(buffer.data(), buffer.size());
    auto read = inputStream.gcount();

    if (read == 0) {
        // end of stream
        break;
    }

    if (read > 0) {
        _rawData.append(buffer.begin(), buffer.begin() + read);
        ProcessData();
    }
}
I'd personally suggest dropping that read == 0 check in favour of the more accurate:
if (inputStream.eof()) { break; } // end of stream
if (!inputStream.good()) { break; } // failure
Note that !good() also catches eof(), so you can combine the two checks:
if (!inputStream.good()) { break; } // failure or end of stream
Live On Coliru:
#include <iostream>
#include <vector>
#include <atomic>

struct Foo {
    void bar() {
        std::vector<char> buffer(BUFFER_SIZE);
        auto& inputStream = std::cin;

        _rawData = std::string {};

        while (!_stopRequested) {
            inputStream.read(buffer.data(), buffer.size());
            auto read = inputStream.gcount();

            if (read > 0) {
                _rawData.append(buffer.begin(), buffer.begin() + read);
                ProcessData();
            }

            if (!inputStream.good()) { break; } // failure or end of stream
        }
    }

protected:
    void ProcessData() {
        //std::cout << "got " << _rawData.size() << " bytes: \n-----\n" << _rawData << "\n-----\n";
        std::cout << "got " << _rawData.size() << " bytes\n";
        _rawData.clear();
    }

    static constexpr size_t BUFFER_SIZE = 128;
    std::atomic_bool _stopRequested { false };
    std::string _rawData;
};

int main() {
    Foo foo;
    foo.bar();
}
Prints (e.g. when reading its own source file):
got 128 bytes
got 128 bytes
got 128 bytes
got 128 bytes
got 128 bytes
got 128 bytes
got 128 bytes
got 92 bytes
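Since the debug-adapter protocol delimits messages with a Content-Length header followed by \r\n\r\n rather than with newlines, the ProcessData step would scan the accumulated bytes for a complete message. A minimal sketch of that splitting, assuming the framing shown in the question (ExtractMessage is a hypothetical helper, not part of the answer's code):

#include <iostream>
#include <string>

// Hypothetical helper: extracts one "Content-Length: N\r\n\r\n<body>" message
// from the front of raw, returning true and removing it if complete.
static bool ExtractMessage(std::string& raw, std::string& body) {
    auto headerEnd = raw.find("\r\n\r\n");
    if (headerEnd == std::string::npos) return false; // header not complete yet

    auto lenPos = raw.find("Content-Length:");
    if (lenPos == std::string::npos || lenPos > headerEnd) return false;

    size_t contentLength = std::stoul(raw.substr(lenPos + 15)); // skip "Content-Length:"
    size_t bodyStart = headerEnd + 4;
    if (raw.size() < bodyStart + contentLength) return false; // body not complete yet

    body = raw.substr(bodyStart, contentLength);
    raw.erase(0, bodyStart + contentLength);
    return true;
}

int main() {
    std::string raw = "Content-Length: 5\r\n\r\nhello", body;
    while (ExtractMessage(raw, body))
        std::cout << "message: " << body << "\n"; // prints "message: hello"
}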


Using mmap memory for a circular buffer with very low overhead

I have a debugging tool which, in order to register its acquired data, uses a data structure called DiskPool (code follows). At start, this data structure mmaps a certain amount of memory (backed by a file on disk). Clients can allocate memory via a simple bump-pointer mechanism (implemented using std::atomic<size_t>).
As the volume of acquired data is massive, I have decided to keep a window over a time period instead of registering and keeping all the data. To fulfil that purpose I have to change the disk pool into a circular buffer, but this must not impose considerable overhead, as the overhead affects the measurement.
Does anybody have an idea how to do this? (For example, using the atomic interface of the STL.)
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <atomic>
#include <memory>
#include <signal.h>
#include <chrono>
#include <thread>
#include <cstdio>   // perror, printf
#include <cstdlib>  // exit
#include <iostream> // std::cout

#define handle_error(msg) \
    do { perror(msg); exit(EXIT_FAILURE); } while (0)

class DiskPool {
    char* addr_;              // Initialized by mmap()
    size_t len_;              // Given by the user, as many memory pages as needed
    std::atomic<size_t> top_; // Offset from addr_
    int fd_;

public:
    DiskPool(size_t l, const char* file) : len_(l), top_(0), fd_(-1)
    {
        struct stat st;
        fd_ = open(file, O_CREAT | O_RDWR, S_IREAD | S_IWRITE);
        if (fd_ == -1)
            handle_error("open");
        if (ftruncate(fd_, len_ * sysconf(_SC_PAGE_SIZE)) != 0)
            handle_error("ftruncate() error");
        else {
            fstat(fd_, &st);
            printf("the file has %ld bytes\n", (long) st.st_size);
        }
        addr_ = static_cast<char*>(mmap(NULL, (len_ * sysconf(_SC_PAGE_SIZE)),
                PROT_READ | PROT_WRITE, MAP_SHARED | MAP_NORESERVE, fd_, 0));
        if (addr_ == MAP_FAILED)
            handle_error("mmap failed.");
    }

    ~DiskPool()
    {
        close(fd_);
        if (munmap(addr_, len_) < 0) {
            handle_error("Could not unmap file");
            exit(1);
        }
        std::cout << "Successfully unmapped the file. " << std::endl;
    }

    void* allocate(size_t s)
    {
        size_t t = std::atomic_fetch_add(&top_, s);
        return addr_ + t;
    }

    void flush() { madvise(addr_, len_, MADV_DONTNEED); }
};
As an example, I created sample code that uses this disk pool to record data at the creation and destruction of an object (AutomaticLifetimeCollector).
#include <pthread.h> // pthread_self
#include <cstdint>   // uint64_t
#include <string>

static const std::string RECORD_FILE = "Data.txt";
static const size_t DISK_POOL_NUMBER_OF_PAGES = 10000;

static std::shared_ptr<DiskPool> diskPool =
    std::shared_ptr<DiskPool>(new DiskPool(DISK_POOL_NUMBER_OF_PAGES, RECORD_FILE.c_str()));

struct TaskRecord
{
    uint64_t tid;        // Thread id
    uint64_t tag;        // User-given identifier ("f1")
    uint64_t start_time; // nanoseconds
    uint64_t stop_time;
    uint64_t cpu_time;

    TaskRecord(int depth, size_t tag, uint64_t start_time) :
        tid(pthread_self()), tag(tag),
        start_time(start_time), stop_time(0), cpu_time(0) {}
};

class AutomaticLifetimeCollector
{
    TaskRecord* record_;

public:
    AutomaticLifetimeCollector(size_t tag) :
        record_(new(diskPool->allocate(sizeof(TaskRecord)))
                TaskRecord(2, tag, (uint64_t)1000000004L))
    {
    }

    ~AutomaticLifetimeCollector() {
        record_->stop_time = (uint64_t)1000000000L;
        record_->cpu_time = (uint64_t)1000000002L;
    }
};

inline void DelayMilSec(unsigned int pduration)
{
    std::this_thread::sleep_until(std::chrono::system_clock::now() +
                                  std::chrono::milliseconds(pduration));
}

std::atomic<bool> LoopsRunFlag {true};

void sigIntHappened(int signal)
{
    std::cout << "Application was terminated.";
    LoopsRunFlag.store(false, std::memory_order_release);
}

int main()
{
    signal(SIGINT, sigIntHappened);
    unsigned int i = 0;
    while (LoopsRunFlag)
    {
        AutomaticLifetimeCollector alc(i++);
        DelayMilSec(2);
    }
    diskPool->flush();
    return 0;
}
So, accounting only for the handing out of variable-sized slices from a shared buffer, I believe a compare-and-swap (CAS) loop should work.
The basic idea is to read a value (which is atomic), do some computation with it, then write the value back, but only if it did not change since it was read. If it did change (because another thread/process got there first), the computation must be redone with the new value.
Since you have variable-sized objects, simply slicing the buffer into n array elements and indexing with (i + 1) % n won't work: given (i + item_len) % capacity, an allocation would be split between the end and the start of the buffer, and while that can be made correct, it is probably not what you want. So that means a condition to wrap whole allocations, but the CPU should predict it pretty well.
#include <iostream>
#include <atomic>

std::atomic<size_t> next_index {0};
const size_t len = 100; // small for demo purposes

size_t alloc(size_t required_size)
{
    if (required_size > len) std::terminate(); // do something; this would cause a buffer overflow

    size_t i, ret_index, new_index;
    i = next_index.load();
    do
    {
        auto space = len - i;
        ret_index = required_size <= space ? i : 0; // wrap if needed
        new_index = ret_index + required_size;
    } while (!next_index.compare_exchange_weak(i, new_index)); // retry if i changed under us

    return ret_index;
}

int main()
{
    std::cout << alloc(4) << std::endl;  // 0 - 3
    std::cout << alloc(8) << std::endl;  // 4 - 11
    std::cout << alloc(32) << std::endl; // 12 - 43
    std::cout << alloc(32) << std::endl; // 44 - 75
    std::cout << alloc(32) << std::endl; // 0 - 31 (76 - 107 would overflow)
    std::cout << alloc(32) << std::endl; // 32 - 63
    std::cout << alloc(32) << std::endl; // 64 - 95
    std::cout << alloc(32) << std::endl; // 0 - 31 (96 - 127 would overflow)
}
This should be fairly simple to plug into your class:
void* allocate(size_t s)
{
    if (s > len_ * sysconf(_SC_PAGE_SIZE)) std::terminate(); // do something; this would cause a buffer overflow

    size_t i, ret_index, new_index;
    i = top_.load();
    do
    {
        auto space = len_ * sysconf(_SC_PAGE_SIZE) - i;
        ret_index = s <= space ? i : 0; // wrap if needed
        new_index = ret_index + s;
    } while (!top_.compare_exchange_weak(i, new_index)); // retry if i changed under us

    return addr_ + ret_index;
}
len_ * sysconf(_SC_PAGE_SIZE) appears in a few places, so it might be more useful to store that byte count in len_ itself.
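For illustration, a minimal sketch of that change with the rest of the class elided (this assumes nothing else relies on len_ holding a page count):

#include <unistd.h>
#include <atomic>
#include <cstddef>

// Sketch: len_ now stores the pool size in bytes, computed once.
class DiskPool {
    char* addr_ = nullptr;
    size_t len_;              // size in bytes, not pages
    std::atomic<size_t> top_;
    int fd_ = -1;

public:
    DiskPool(size_t pages, const char* file)
        : len_(pages * sysconf(_SC_PAGE_SIZE)), top_(0)
    {
        (void)file; // open/ftruncate/mmap as before, passing len_ directly
    }

    size_t capacity() const { return len_; } // pool size in bytes
};

int main() {
    DiskPool pool(4, "Data.txt");
    return pool.capacity() > 0 ? 0 : 1;
}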

Save exr/pfm as little endian

I load a bmp file into a CImg object and save it as a pfm file. That works. Then I use this .pfm file with another library, but that library doesn't accept big-endian data, just little-endian.
CImg<float> image;
image.load_bmp(_T("D:\\Temp\\memorial.bmp"));
image.normalize(0.0, 1.0);
image.save_pfm(_T("D:\\Temp\\memorial.pfm"));
So, how can I save the bmp file to a pfm file as little-endian instead of big-endian? Is it possible?
Later edit:
I have checked the first 5 elements of the .pfm file header. This is the result without invert_endianness:
CImg<float> image;
image.load_bmp(_T("D:\\Temp\\memorial.bmp"));
image.normalize(0.0, 1.0);
image.save_pfm(_T("D:\\Temp\\memorial.pfm"));
PF
512
768
1.0
=øøù=€€=‘>
and this is the result with invert_endianness:
CImg<float> image;
image.load_bmp(_T("D:\\Temp\\memorial.bmp"));
image.invert_endianness();
image.normalize(0.0, 1.0);
image.save_pfm(_T("D:\\Temp\\memorial.pfm"));
PF
512
768
1.0
?yôx?!ù=‚ì:„ç‹?
The result is the same.
This is definitely not a proper answer but it might work as a workaround for the time being.
I didn't find out how to properly invert the endianness using CImg's functions, so I modified the resulting file instead. It's a hack: in the PFM format, the sign of the scale factor on the third header line encodes the endianness (negative means little-endian, positive means big-endian), so flipping that sign reinterprets the endian-swapped data. The result opens fine in GIMP and looks very close to the original image, but I can't say if it works with the library you are using. It may be worth a try.
Comments in the code:
#include "CImg/CImg.h"
#include <algorithm>
#include <filesystem> // >= C++17 must be selected as Language Standard
#include <ios>
#include <iostream>
#include <iterator>
#include <fstream>
#include <string>
using namespace cimg_library;
namespace fs = std::filesystem;
// a class to remove temporary files
class remove_after_use {
public:
remove_after_use(const std::string& filename) : name(filename) {}
remove_after_use(const remove_after_use&) = delete;
remove_after_use& operator=(const remove_after_use&) = delete;
const char* c_str() const { return name.c_str(); }
operator std::string const& () const { return name; }
~remove_after_use() {
try {
fs::remove(name);
}
catch (const std::exception & ex) {
std::cerr << "remove_after_use: " << ex.what() << "\n";
}
}
private:
std::string name;
};
// The function to hack the file saved by CImg
template<typename T>
bool save_pfm_endianness_inverted(const T& img, const std::string& filename) {
remove_after_use tempfile("tmp.pfm");
// get CImg's endianness inverted image and save it to a temporary file
img.get_invert_endianness().save_pfm(tempfile.c_str());
// open the final file
std::ofstream os(filename, std::ios::binary);
// read "tmp.pfm" and modify
// The Scale Factor / Endianness line
if (std::ifstream is; os && (is = std::ifstream(tempfile, std::ios::binary))) {
std::string lines[3];
// Read the 3 PFM header lines as they happen to be formatted by
// CImg. Will maybe not work with another library.
size_t co = 0;
for (; co < std::size(lines) && std::getline(is, lines[co]); ++co);
if (co == std::size(lines)) { // success
// write the first two lines back unharmed:
os << lines[0] << '\n' << lines[1] << '\n';
if (lines[2].empty()) {
std::cerr << "something is wrong with the pfm header\n";
return false;
}
// add a '-' if it's missing, remove it if it's there:
if (lines[2][0] == '-') { // remove the - to invert
os << lines[2].substr(1);
}
else { // add a - to invert
os << '-' << lines[2] << '\n';
}
// copy all the rest as-is:
std::copy(std::istreambuf_iterator<char>(is),
std::istreambuf_iterator<char>{},
std::ostreambuf_iterator<char>(os));
}
else {
std::cerr << "failed reading pfm header\n";
return false;
}
}
else {
std::cerr << "opening files failed\n";
return false;
}
return true;
}
int main()
{
CImg<float> img("memorial.bmp");
img.normalize(0.f, 1.f);
std::cout << "saved ok: " << std::boolalpha
<< save_pfm_endianness_inverted(img, "memorial.pfm") << "\n";
}
Wanting to solve the same issue in classic C++ style (just for the language's sake), I wrote:
BOOL CMyDoc::SavePfmEndiannessInverted(CImg<float>& img, const CString sFileName)
{
    CString sDrive, sDir;
    _splitpath(sFileName, sDrive.GetBuffer(), sDir.GetBuffer(), NULL, NULL);
    CString sTemp;
    sTemp.Format(_T("%s%sTemp.tmp"), sDrive, sDir);
    sDrive.ReleaseBuffer();
    sDir.ReleaseBuffer();

    CRemoveAfterUse TempFile(sTemp);
    img.get_invert_endianness().save_pfm(TempFile.c_str());

    CFile fileTemp;
    if (! fileTemp.Open(sTemp, CFile::typeBinary))
        return FALSE;

    char c;
    UINT nRead = 0;
    int nCount = 0;
    ULONGLONG nPosition = 0;
    CString sScale;
    CByteArray arrHeader, arrData;
    do
    {
        nRead = fileTemp.Read((char*)&c, sizeof(char));
        switch (nCount)
        {
        case 0:
        case 1:
            arrHeader.Add(static_cast<BYTE>(c));
            break;
        case 2: // retrieve the '1.0' string
            sScale += c;
            break;
        }
        if ('\n' == c) // is new line
        {
            nCount++;
        }
        if (nCount >= 3) // read the header, so go out
        {
            nPosition = fileTemp.GetPosition();
            break;
        }
    } while (nRead > 0);

    if (nPosition > 1)
    {
        arrData.SetSize(fileTemp.GetLength() - nPosition);
        fileTemp.Read(arrData.GetData(), (UINT)arrData.GetSize());
    }
    fileTemp.Close();

    CFile file;
    if (! file.Open(sFileName, CFile::typeBinary | CFile::modeCreate | CFile::modeReadWrite))
        return FALSE;

    CByteArray arrTemp;
    ConvertCStringToByteArray(sScale, arrTemp);
    arrHeader.Append(arrTemp);
    arrHeader.Append(arrData);
    file.Write(arrHeader.GetData(), (UINT)arrHeader.GetSize());
    file.Close();

    return TRUE;
}
But it seems not to do the job, because the resulting image is darker.
What have I done wrong here? The code seems very clear to me, and yet it does not work as expected ...
Of course, this approach is less efficient, I know, but as I said before, it's just for the language's sake.
I think there is nothing wrong with my code :)
Here is the trial:
CImg<float> image;
image.load_bmp(_T("D:\\Temp\\memorial.bmp"));
image.normalize(0.0f, 1.0f);
image.save_pfm(_T("D:\\Temp\\memorial.pfm"));
image.get_invert_endianness().save(_T("D:\\Temp\\memorial_inverted.pfm"));
and the memorial.pfm looks like this: [screenshot omitted]
and memorial_inverted.pfm looks like this: [screenshot omitted]

Converting asio read_some to async version

I have the following code that reads from a TCP socket using the Boost Asio read_some function. Currently the code is synchronous and I need to convert it to the async version. The issue is that some bytes are read first to identify the packet type and to get the length of the packet; then a loop reads the remaining data. Would I need to use two callbacks to do this asynchronously, or can it be done with one (which would be preferable)?
void Transport::OnReadFromTcp()
{
    int read = 0;
    // read 7 bytes from TCP into mTcpBuffer
    m_sslsock->read_some(asio::buffer(mTcpBuffer, 7));

    bool tag = true;
    for (unsigned char i = 0; i < 5; i++)
    {
        tag = tag && (mTcpBuffer[i] == g_TcpPacketTag[i]);
    }

    // get the length from the last two bytes
    unsigned short dataLen = (mTcpBuffer[5]) | (mTcpBuffer[6] << 8);
    mBuff = new char[dataLen];

    int readTotal = 0;
    while (readTotal < dataLen)
    {
        // read length's worth of data from the tcp pipe into the buffer
        int readlen = dataLen;
        size_t read = m_sslsock->read_some(asio::buffer(&mBuff[readTotal], readlen));
        readlen = dataLen - read;
        readTotal += read;
    }
    // Process data .....
}
The first step is to realize that you can remove the read_some loop entirely using the free function read:
void Transport::OnReadFromTcp() {
    int read = 0;
    // read 7 bytes from TCP into mTcpBuffer
    size_t bytes = asio::read(*m_sslsock, asio::buffer(mTcpBuffer, 7), asio::transfer_all());
    assert(bytes == 7);

    bool tag = g_TcpPacketTag.end() ==
        std::mismatch(g_TcpPacketTag.begin(), g_TcpPacketTag.end(),
                      mTcpBuffer.begin(), mTcpBuffer.end()).first;

    // get the length from the last two bytes
    uint16_t const dataLen = mTcpBuffer[5] | (mTcpBuffer[6] << 8);
    mBuff.resize(dataLen);

    size_t readTotal = asio::read(*m_sslsock, asio::buffer(mBuff), asio::transfer_exactly(dataLen));
    assert(mBuff.size() == readTotal);
    assert(dataLen == readTotal);
}
That holds regardless of whether execution is asynchronous.
Making it asynchronous is slightly involved, because it requires assumptions about lifetime of the buffers/Transport instance as well as potential multi-threading. I'll provide a sample of that after my morning coffee :)
Demo without threading/lifetime complications:
Live On Coliru
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <boost/bind.hpp>
#include <iostream>
#include <array>
#include <cassert>

namespace asio = boost::asio;
namespace ssl = asio::ssl;

namespace {
    static std::array<char, 5> g_TcpPacketTag {{'A','B','C','D','E'}};
}

struct Transport {
    using tcp = asio::ip::tcp;
    using SslSocket = std::shared_ptr<asio::ssl::stream<tcp::socket> >;

    Transport(SslSocket s) : m_sslsock(s) { }

    void OnReadFromTcp();
    void OnHeaderReceived(boost::system::error_code ec, size_t transferred);
    void OnContentReceived(boost::system::error_code ec, size_t transferred);

private:
    uint16_t datalen() const {
        return mTcpBuffer[5] | (mTcpBuffer[6] << 8);
    }

    SslSocket m_sslsock;
    std::array<char, 7> mTcpBuffer;
    std::vector<char> mBuff;
};

void Transport::OnReadFromTcp() {
    // read 7 bytes from TCP into mTcpBuffer
    asio::async_read(*m_sslsock, asio::buffer(mTcpBuffer, 7), asio::transfer_all(),
        boost::bind(&Transport::OnHeaderReceived, this, asio::placeholders::error, asio::placeholders::bytes_transferred)
    );
}

#include <boost/range/algorithm/mismatch.hpp> // I love sugar

void Transport::OnHeaderReceived(boost::system::error_code ec, size_t bytes) {
    if (ec) {
        std::cout << "Error: " << ec.message() << "\n";
    }
    assert(bytes == 7);

    bool tag = (g_TcpPacketTag.end() == boost::mismatch(g_TcpPacketTag, mTcpBuffer).first);

    if (tag) {
        // get the length from the last two bytes
        mBuff.resize(datalen());

        asio::async_read(*m_sslsock, asio::buffer(mBuff), asio::transfer_exactly(datalen()),
            boost::bind(&Transport::OnContentReceived, this, asio::placeholders::error, asio::placeholders::bytes_transferred)
        );
    } else {
        std::cout << "TAG MISMATCH\n"; // TODO handle error
    }
}

void Transport::OnContentReceived(boost::system::error_code ec, size_t readTotal) {
    assert(mBuff.size() == readTotal);
    assert(datalen() == readTotal);
    std::cout << "Successfully completed receive of " << datalen() << " bytes\n";
}

int main() {
    asio::io_service svc;

    using Socket = Transport::SslSocket::element_type;

    // connect to localhost:6767 with SSL
    ssl::context ctx(ssl::context::sslv23);
    auto s = std::make_shared<Socket>(svc, ctx);
    s->lowest_layer().connect({ {}, 6767 });
    s->handshake(Socket::handshake_type::client);

    // do transport
    Transport tx(s);
    tx.OnReadFromTcp();

    svc.run();

    // all done
    std::cout << "All done\n";
}
When run against a sample server that accepts SSL connections on port 6767:
(printf "ABCDE\x01\x01F"; cat main.cpp) |
openssl s_server -accept 6767 -cert so.crt -pass pass:test
Prints:
Successfully completed receive of 257 bytes
All done

How to copy the output of linux command to a C++ variable

I'm calling a Linux command from within a C++ program, which produces the following output. I need to copy the first column of the output to a C++ variable (say a long int). How can I do it? If that is not possible, how can I copy this result into a .txt file that I can work with?
Edit
0 +0
2361294848 +2361294848
2411626496 +50331648
2545844224 +134217728
2713616384 +167772160
I have this stored in a file, file.txt, and I'm using the following code to
extract the left column (without the second, '+'-prefixed column) and store the values as integers:
string stringy="";
int can_can=0;
for(i=begin;i<length;i++)
{
if (buffer[i]==' ' && can_can ==1) //**buffer** is the whole text file read in char*
{
num=atoi(stringy.c_str());
array[univ]=num; // This where I store the values.
univ+=1;
can_can=1;
}
else if (buffer[i]==' ' && can_can ==0)
{
stringy="";
}
else if (buffer[i]=='+')
{can_can=0;}
else{stringy.append(buffer[i]);}
}
I'm getting a segmentation fault with this. What can be done?
Thanks in advance.
Just create a simple streambuf wrapper around popen()
#include <iostream>
#include <string>
#include <stdio.h>

struct SimpleBuffer: public std::streambuf
{
    typedef std::streambuf::traits_type traits;
    typedef traits::int_type int_type;

    SimpleBuffer(std::string const& command)
        : stream(popen(command.c_str(), "r"))
    {
        this->setg(&c[0], &c[0], &c[0]);
        this->setp(0, 0);
    }

    ~SimpleBuffer()
    {
        if (stream != NULL)
        {
            pclose(stream); // streams opened with popen() must be closed with pclose()
        }
    }

    virtual int_type underflow()
    {
        std::size_t size = fread(c, 1, 100, stream);
        this->setg(&c[0], &c[0], &c[size]);

        return size == 0 ? traits::eof() : traits::to_int_type(c[0]);
    }

private:
    FILE* stream;
    char c[100];
};
Usage:
int main()
{
    SimpleBuffer buffer("echo 55 hi there Loki");
    std::istream command(&buffer);

    int value;
    command >> value;

    std::string line;
    std::getline(command, line);

    std::cout << "Got int(" << value << ") String (" << line << ")\n";
}
Result:
> ./a.out
Got int(55) String ( hi there Loki)
It is popen you're probably looking for. Try man popen, or see this little example:
#include <iostream>
#include <stdio.h>

using namespace std;

int main()
{
    FILE *in;
    char buff[512];

    if (!(in = popen("my_script_from_command_line", "r"))) {
        return 1;
    }

    while (fgets(buff, sizeof(buff), in) != NULL) {
        cout << buff; // here you have each line
                      // of the output of your script in buff
    }
    pclose(in);

    return 0;
}
Unfortunately, it’s not easy since the platform API is written for C. The following is a simple working example:
#include <cstdio>
#include <cstdlib> // EXIT_FAILURE
#include <iostream>

int main() {
    char const* command = "ls -l";
    FILE* fpipe = popen(command, "r");

    if (not fpipe) {
        std::cerr << "Unable to execute command\n";
        return EXIT_FAILURE;
    }

    char buffer[256];

    while (std::fgets(buffer, sizeof buffer, fpipe)) {
        std::cout << buffer;
    }

    pclose(fpipe);
}
However, I’d suggest wrapping the FILE* handle in a RAII class to take care of resource management.
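For example, a minimal sketch of such a wrapper (the Pipe class and its interface are illustrative, not an existing library API):

#include <cstdio>
#include <iostream>
#include <stdexcept>
#include <string>

// Hypothetical RAII wrapper: pclose() runs even if reading throws,
// so the pipe handle cannot leak.
class Pipe {
public:
    explicit Pipe(const std::string& command)
        : handle_(popen(command.c_str(), "r")) {
        if (!handle_) throw std::runtime_error("popen failed");
    }
    ~Pipe() { pclose(handle_); }
    Pipe(const Pipe&) = delete;
    Pipe& operator=(const Pipe&) = delete;

    FILE* get() const { return handle_; }

private:
    FILE* handle_;
};

int main() {
    Pipe pipe("ls -l");
    char buffer[256];
    while (std::fgets(buffer, sizeof buffer, pipe.get()))
        std::cout << buffer;
}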
You probably want to use popen to execute the command. This will give you a FILE * from which you can read its output. From there, you can parse out the first number with (for example) something like:
fscanf(inpipe, "%ld %*d", &first_num);
which, just like when reading from a file, you'll normally repeat until you receive an end of file indication, such as:
long first_num, total = 0;
while (1 == fscanf(inpipe, "%ld %*d", &first_num))
    total = first_num;
printf("%ld\n", total);

Can anyone explain why my crypto++ decrypted file is 16 bytes short?

In order to feed AES-encrypted text as a std::istream to a parser component, I am trying to create a std::streambuf implementation wrapping the vanilla crypto++ encryption/decryption.
The main() function calls the following functions to compare my wrapper with the vanilla implementation:
EncryptFile() - encrypt file using my streambuf implementation
DecryptFile() - decrypt file using my streambuf implementation
EncryptFileVanilla() - encrypt file using vanilla crypto++
DecryptFileVanilla() - decrypt file using vanilla crypto++
The problem is that whilst the encrypted files created by EncryptFile() and EncryptFileVanilla() are identical, the decrypted file created by DecryptFile() is incorrect, being 16 bytes short of that created by DecryptFileVanilla(). Probably not coincidentally, the block size is also 16.
I think the issue must be in CryptStreamBuffer::GetNextChar(), but I've been staring at it and the crypto++ documentation for hours.
Can anybody help/explain?
Any other comments about how crummy or naive my std::streambuf implementation is are also welcome ;-)
Thanks,
Tom
// Runtime Includes
#include <iostream>

// Crypto++ Includes
#include "aes.h"
#include "modes.h"   // xxx_Mode< >
#include "filters.h" // StringSource and
                     // StreamTransformation
#include "files.h"

using namespace std;

class CryptStreamBuffer: public std::streambuf {
public:
    CryptStreamBuffer(istream& encryptedInput, CryptoPP::StreamTransformation& c);
    CryptStreamBuffer(ostream& encryptedOutput, CryptoPP::StreamTransformation& c);
    ~CryptStreamBuffer();

protected:
    virtual int_type overflow(int_type ch = traits_type::eof());
    virtual int_type uflow();
    virtual int_type underflow();
    virtual int_type pbackfail(int_type ch);
    virtual int sync();

private:
    int GetNextChar();

    int m_NextChar; // Buffered character
    CryptoPP::StreamTransformationFilter* m_StreamTransformationFilter;
    CryptoPP::FileSource* m_Source;
    CryptoPP::FileSink* m_Sink;
}; // class CryptStreamBuffer

CryptStreamBuffer::CryptStreamBuffer(istream& encryptedInput, CryptoPP::StreamTransformation& c) :
    m_NextChar(traits_type::eof()),
    m_StreamTransformationFilter(0),
    m_Source(0),
    m_Sink(0) {
    m_StreamTransformationFilter = new CryptoPP::StreamTransformationFilter(c, 0, CryptoPP::BlockPaddingSchemeDef::PKCS_PADDING);
    m_Source = new CryptoPP::FileSource(encryptedInput, false, m_StreamTransformationFilter);
}

CryptStreamBuffer::CryptStreamBuffer(ostream& encryptedOutput, CryptoPP::StreamTransformation& c) :
    m_NextChar(traits_type::eof()),
    m_StreamTransformationFilter(0),
    m_Source(0),
    m_Sink(0) {
    m_Sink = new CryptoPP::FileSink(encryptedOutput);
    m_StreamTransformationFilter = new CryptoPP::StreamTransformationFilter(c, m_Sink, CryptoPP::BlockPaddingSchemeDef::PKCS_PADDING);
}

CryptStreamBuffer::~CryptStreamBuffer() {
    if (m_Sink) {
        delete m_StreamTransformationFilter;
        // m_StreamTransformationFilter owns and deletes m_Sink.
    }
    if (m_Source) {
        delete m_Source;
        // m_Source owns and deletes m_StreamTransformationFilter.
    }
}

CryptStreamBuffer::int_type CryptStreamBuffer::overflow(int_type ch) {
    return m_StreamTransformationFilter->Put((byte)ch);
}

CryptStreamBuffer::int_type CryptStreamBuffer::uflow() {
    int_type result = GetNextChar();

    // Reset the buffered character
    m_NextChar = traits_type::eof();

    return result;
}

CryptStreamBuffer::int_type CryptStreamBuffer::underflow() {
    return GetNextChar();
}

CryptStreamBuffer::int_type CryptStreamBuffer::pbackfail(int_type ch) {
    return traits_type::eof();
}

int CryptStreamBuffer::sync() {
    // TODO: Not sure sync is the correct place to be doing this.
    // Should it be in the destructor?
    if (m_Sink) {
        m_StreamTransformationFilter->MessageEnd();
        // m_StreamTransformationFilter->Flush(true);
    }
    return 0;
}

int CryptStreamBuffer::GetNextChar() {
    // If we have a buffered character do nothing
    if (m_NextChar != traits_type::eof()) {
        return m_NextChar;
    }

    // If there are no more bytes currently available then pump the source
    if (m_StreamTransformationFilter->MaxRetrievable() == 0) {
        m_Source->Pump(1024);
    }

    // Retrieve the next byte
    byte nextByte;
    size_t noBytes = m_StreamTransformationFilter->Get(nextByte);
    if (0 == noBytes) {
        return traits_type::eof();
    }

    // Buffer up the next character
    m_NextChar = nextByte;

    return m_NextChar;
}
void InitKey(byte key[]) {
    key[0] = -62;
    key[1] = 102;
    key[2] = 78;
    key[3] = 75;
    key[4] = -96;
    key[5] = 125;
    key[6] = 66;
    key[7] = 125;
    key[8] = -95;
    key[9] = -66;
    key[10] = 114;
    key[11] = 22;
    key[12] = 48;
    key[13] = 111;
    key[14] = -51;
    key[15] = 112;
}
/** Decrypt using my CryptStreamBuffer */
void DecryptFile(const char* sourceFileName, const char* destFileName) {
    ifstream ifs(sourceFileName, ios::in | ios::binary);
    ofstream ofs(destFileName, ios::out | ios::binary);

    byte key[CryptoPP::AES::DEFAULT_KEYLENGTH];
    InitKey(key);
    CryptoPP::ECB_Mode<CryptoPP::AES>::Decryption decryptor(key, sizeof(key));

    if (ifs) {
        if (ofs) {
            CryptStreamBuffer cryptBuf(ifs, decryptor);
            std::istream decrypt(&cryptBuf);

            int c;
            while (EOF != (c = decrypt.get())) {
                ofs << (char)c;
            }
            ofs.flush();
        }
        else {
            std::cerr << "Failed to open file '" << destFileName << "'." << endl;
        }
    }
    else {
        std::cerr << "Failed to open file '" << sourceFileName << "'." << endl;
    }
}

/** Encrypt using my CryptStreamBuffer */
void EncryptFile(const char* sourceFileName, const char* destFileName) {
    ifstream ifs(sourceFileName, ios::in | ios::binary);
    ofstream ofs(destFileName, ios::out | ios::binary);

    byte key[CryptoPP::AES::DEFAULT_KEYLENGTH];
    InitKey(key);
    CryptoPP::ECB_Mode<CryptoPP::AES>::Encryption encryptor(key, sizeof(key));

    if (ifs) {
        if (ofs) {
            CryptStreamBuffer cryptBuf(ofs, encryptor);
            std::ostream encrypt(&cryptBuf);

            int c;
            while (EOF != (c = ifs.get())) {
                encrypt << (char)c;
            }
            encrypt.flush();
        }
        else {
            std::cerr << "Failed to open file '" << destFileName << "'." << endl;
        }
    }
    else {
        std::cerr << "Failed to open file '" << sourceFileName << "'." << endl;
    }
}

/** Decrypt using vanilla crypto++ */
void DecryptFileVanilla(const char* sourceFileName, const char* destFileName) {
    byte key[CryptoPP::AES::DEFAULT_KEYLENGTH];
    InitKey(key);
    CryptoPP::ECB_Mode<CryptoPP::AES>::Decryption decryptor(key, sizeof(key));

    CryptoPP::FileSource(sourceFileName, true,
        new CryptoPP::StreamTransformationFilter(decryptor,
            new CryptoPP::FileSink(destFileName), CryptoPP::BlockPaddingSchemeDef::PKCS_PADDING
        ) // StreamTransformationFilter
    ); // FileSource
}

/** Encrypt using vanilla crypto++ */
void EncryptFileVanilla(const char* sourceFileName, const char* destFileName) {
    byte key[CryptoPP::AES::DEFAULT_KEYLENGTH];
    InitKey(key);
    CryptoPP::ECB_Mode<CryptoPP::AES>::Encryption encryptor(key, sizeof(key));

    CryptoPP::FileSource(sourceFileName, true,
        new CryptoPP::StreamTransformationFilter(encryptor,
            new CryptoPP::FileSink(destFileName), CryptoPP::BlockPaddingSchemeDef::PKCS_PADDING
        ) // StreamTransformationFilter
    ); // FileSource
}

int main(int argc, char* argv[])
{
    EncryptFile(argv[1], "encrypted.out");
    DecryptFile("encrypted.out", "decrypted.out");
    EncryptFileVanilla(argv[1], "encrypted_vanilla.out");
    DecryptFileVanilla("encrypted_vanilla.out", "decrypted_vanilla.out");
    return 0;
}
After working with a debug build of crypto++, it turned out that what was missing was a call advising the StreamTransformationFilter that there would be nothing more coming from the Source, and that it should wrap up the processing of the final few bytes, including the padding.
In CryptStreamBuffer::GetNextChar():
Replace:
// If there are no more bytes currently available then pump the source
if (m_StreamTransformationFilter->MaxRetrievable() == 0) {
    m_Source->Pump(1024);
}
with:
// If there are no more bytes currently available from the filter then
// pump the source.
if (m_StreamTransformationFilter->MaxRetrievable() == 0) {
    if (0 == m_Source->Pump(1024)) {
        // This seems to be required to ensure the final bytes are readable
        // from the filter.
        m_StreamTransformationFilter->ChannelMessageEnd(CryptoPP::DEFAULT_CHANNEL);
    }
}
I make no claims that this is the best solution, just one I discovered by trial and error that appears to work.
If your input length is not a multiple of the 16-byte block size, you need to pad the last block with dummy bytes. If the last block is less than 16 bytes, it is dropped by crypto++ and not encrypted. When decrypting, you need to truncate the dummy bytes.
That 'other way' you are referring to already does the padding and truncation for you.
So what should the dummy bytes be, so that you know how many there are and can truncate them? I use the following pattern: fill each padding byte with the count of padding bytes.
Examples: You need to add 8 bytes? Set them to 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08. You need to add 3 bytes? Set them to 0x03, 0x03, 0x03, etc.
When decrypting, take the value of the last byte of the output buffer. Assume it is N. Check whether the last N bytes are all equal to N. Truncate them if so.
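A minimal sketch of that padding scheme, which is essentially PKCS#7 (the function names are illustrative):

#include <cstdint>
#include <iostream>
#include <stdexcept>
#include <vector>

constexpr size_t kBlockSize = 16;

// Pad so the length becomes a multiple of kBlockSize; each padding byte
// holds the padding count. A full extra block is added if already aligned,
// so the count is always recoverable.
void pad(std::vector<uint8_t>& data) {
    uint8_t n = kBlockSize - data.size() % kBlockSize;
    data.insert(data.end(), n, n);
}

// Read the count N from the last byte, verify the last N bytes, truncate.
void unpad(std::vector<uint8_t>& data) {
    if (data.empty()) throw std::runtime_error("bad padding");
    uint8_t n = data.back();
    if (n == 0 || n > kBlockSize || n > data.size()) throw std::runtime_error("bad padding");
    for (size_t i = data.size() - n; i < data.size(); ++i)
        if (data[i] != n) throw std::runtime_error("bad padding");
    data.resize(data.size() - n);
}

int main() {
    std::vector<uint8_t> msg(13, 'x');
    pad(msg);                        // now 16 bytes; the last 3 are 0x03
    std::cout << msg.size() << "\n"; // 16
    unpad(msg);
    std::cout << msg.size() << "\n"; // 13
}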
UPDATE:
CryptStreamBuffer::CryptStreamBuffer(istream& encryptedInput, CryptoPP::StreamTransformation& c) :
    m_NextChar(traits_type::eof()),
    m_StreamTransformationFilter(0),
    m_Source(0),
    m_Sink(0) {
    m_StreamTransformationFilter = new CryptoPP::StreamTransformationFilter(c, 0, CryptoPP::BlockPaddingSchemeDef::ZEROS_PADDING);
    m_Source = new CryptoPP::FileSource(encryptedInput, false, m_StreamTransformationFilter);
}

CryptStreamBuffer::CryptStreamBuffer(ostream& encryptedOutput, CryptoPP::StreamTransformation& c) :
    m_NextChar(traits_type::eof()),
    m_StreamTransformationFilter(0),
    m_Source(0),
    m_Sink(0) {
    m_Sink = new CryptoPP::FileSink(encryptedOutput);
    m_StreamTransformationFilter = new CryptoPP::StreamTransformationFilter(c, m_Sink, CryptoPP::BlockPaddingSchemeDef::ZEROS_PADDING);
}
Setting ZEROS_PADDING made your code work (tested on text files). However, I have not yet found the cause of why it does not work with DEFAULT_PADDING.