Protobuf SerializeToCodedStream() returns "false" - C++

I am trying to serialize my Protocol Buffer message on the Windows platform; my language is C++. SerializeToCodedStream() returns false. Please see the code below and let me know where I am going wrong.
Proto file
message mobile_list{
required string name = 1;
required DeviceType type = 2;
required string imei = 3;
required bytes wifiAddress = 4;
optional bytes macAddress = 5;
}
Protocol buffer Code
#include <unistd.h>
#include "mobile.pb.h"
#include <iostream>
#include <google/protobuf/message.h>
#include <google/protobuf/descriptor.h>
#include <google/protobuf/io/zero_copy_stream_impl.h>
#include <google/protobuf/io/coded_stream.h>
#include <google/protobuf/io/zero_copy_stream_impl_lite.h>
using namespace google::protobuf::io;
using namespace std;
int main(int argc, char** argv){
mobile_list payload;
payload.set_name("Testing");
payload.set_type(mobile_list::Android);
payload.set_imei("123456");
payload.set_wifiaddress("96-00-OM-1E-4R-99");
payload.set_macaddress("96-00-OM-1E-4R-99");
int siz = payload.ByteSize();
char *pkt = new char [siz];
google::protobuf::io::ArrayOutputStream as(pkt,siz);
CodedOutputStream *coded_output = new CodedOutputStream(&as);
coded_output->WriteVarint32(payload.ByteSize());
payload.SerializeToCodedStream(coded_output);
return 0;
}

You are allocating a buffer equal to the size of the message (payload.ByteSize()), but then you are trying to write a varint followed by the actual message into it. That adds up to more than the message size, so the serialization fails because it runs out of space.
You should do:
int siz = payload.ByteSize();
siz += CodedOutputStream::VarintSize32(siz);
// ... continue as before ...
Also, on an unrelated note, you are calling ByteSize() multiple times, which is wasteful because the whole message has to be scanned and counted each time. Instead, you should keep a copy of the original size to reuse.

Related

Access Violation when using OpenSSL's Camellia

I'm trying to write a Camellia decryption program on Windows using C++ as the language and OpenSSL as the cryptographic provider. When attempting to execute the code I get the following error: Exception thrown at 0x00007FFABB31AEF8 (libcrypto-3-x64.dll) in Lab8.exe: 0xC0000005: Access violation reading location 0x0000000000000028.
The code is:
#include <iostream>
#include <windows.h>
#include <openssl/camellia.h>
#include <openssl/conf.h>
#include <openssl/err.h>
#include <string.h>
#pragma warning(disable : 4996)
unsigned char iv[] = "\xd4\xc5\x91\xad\xe5\x7e\x56\x69\xcc\xcd\xb7\x11\xcf\x02\xec\xbc";
unsigned char camcipher[] = "\x00\xf7\x41\x73\x04\x5b\x99\xea\xe5\x6d\x41\x8e\xc4\x4d\x21\x5c";
const unsigned char camkey[] = "\x92\x63\x88\x77\x9b\x02\xad\x91\x3f\xd9\xd2\x45\xb1\x92\x21\x5f\x9d\x48\x35\xd5\x6e\xf0\xe7\x3a\x39\x26\xf7\x92\xf7\x89\x5d\x75";
unsigned char plaintext;
CAMELLIA_KEY finalkey;
int main()
{
Camellia_set_key(camkey, 256, &finalkey);
Camellia_cbc_encrypt(camcipher, (unsigned char*)plaintext, CAMELLIA_BLOCK_SIZE,&finalkey, iv, 0);
std::cout << plaintext;
}
The key and IV were generated using urandom from Python 3 and then used to create the ciphertext with the PyCrypto Camellia library.
I purposefully left the ciphertext at 16 bytes to avoid padding. I'm really not sure what I'm doing wrong at all. Any help would be awesome.
The plaintext should read "a secret message".
Looks like you need to change unsigned char plaintext; to unsigned char plaintext[17]; (16 bytes for the block plus a terminating NUL so it can be printed). As written, the cast (unsigned char*)plaintext treats the uninitialized byte's value as an address, so the decrypt writes through a garbage pointer.

how can I write pixel color data to a bmp image file with stb_image?

I've already opened the BMP file (one-channel grayscale) and stored each pixel color on a new line as hex.
After doing some processing on the data (not the point of this question), I need to export a BMP image from my data.
How can I load the text file (data) and use stb_image_write?
pixel to image :
#include <cstdio>
#include <cstdlib>
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"
using namespace std;
int main() {
FILE* datafile ;
datafile = fopen("pixeldata.x" , "w");
unsigned char* pixeldata ;//???
char Image2[14] = "image_out.bmp";
stbi_write_bmp(Image2, 512, 512, 1, pixeldata);
image to pixel:
#include <cstdio>
#include <cstdlib>
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
using namespace std;
const size_t total_pixel = 512*512;
int main() {
FILE* datafile ;
datafile = fopen("pixeldata.x" , "w");
char Image[10] = "image.bmp";
int width;
int height;
int channels;
unsigned char *pixeldata = stbi_load(Image, &width, &height, &channels, 1);
if(pixeldata != NULL){
for(int i=0; i<total_pixel; i++)
{
fprintf(datafile,"%x%s", pixeldata[i],"\n");
}
}
}
There are a lot of weaknesses in the question – too much to sort this out in comments...
This question is tagged C++. Why the error-prone fprintf()? Why not std::fstream? It has similar capabilities (if not more) but adds type-safety, which the printf() family cannot provide.
The counterpart of fprintf() is fscanf(). The format specifiers are similar, but the storage type has to be encoded in the specifiers even more carefully than with fprintf().
If the first code sample is the attempt to read pixels back from pixeldata.x... why datafile = fopen("pixeldata.x" , "w");? To open a file with fopen() for reading, the mode should be "r".
char Image2[14] = "image_out.bmp"; is correct (if I counted correctly) but maintenance-unfriendly. Let the compiler do the work for you:
char Image2[] = "image_out.bmp";
To provide storage for pixel data with (in OPs case) fixed size of 512 × 512 bytes, the simplest would be:
unsigned char pixeldata[512 * 512];
Storing an array of that size (512 × 512 = 262144 bytes = 256 KByte) in a local variable might be seen as a potential issue by some. The alternative would be to use a std::vector<unsigned char> pixeldata; instead. (std::vector allocates its storage dynamically in heap memory, whereas local variables usually live on a kind of stack memory, which in turn is usually of limited size.)
Concerning the std::vector<unsigned char> pixeldata;, I see two options:
definition with pre-allocation:
std::vector<unsigned char> pixeldata(512 * 512);
so that it can be used just like the array above.
definition without pre-allocation:
std::vector<unsigned char> pixeldata;
That would allow adding every pixel, as it is read, to the end with std::vector::push_back().
It may be worth reserving the final size beforehand, as it is known from the beginning:
std::vector<unsigned char> pixeldata;
pixeldata.reserve(512 * 512); // size reserved but not yet used
So, this is how it could look finally:
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <vector>
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"
int main()
{
const int w = 512, h = 512;
// read data
FILE *datafile = fopen("pixeldata.x" , "r");
if (!datafile) { // success of file open should be tested ALWAYS
std::cerr << "Cannot open 'pixeldata.x'!\n";
return -1; // ERROR! (bail out)
}
typedef unsigned char uchar; // for convenience
std::vector<uchar> pixeldata(w * h);
char Image2[] = "image_out.bmp";
for (int i = 0, n = w * h; i < n; ++i) {
if (fscanf(datafile, "%hhx", &pixeldata[i]) < 1) {
std::cerr << "Failed to read value " << i << " of 'pixeldata.x'!\n";
return -1; // ERROR! (bail out)
}
}
fclose(datafile);
// write BMP image
stbi_write_bmp(Image2, w, h, 1, pixeldata.data());
// Actually, success of this should be tested as well.
// done
return 0;
}
Some additional notes:
Please take this code with a grain of salt. I haven't compiled or tested it. (I leave that as a task for the OP but will react to "bug reports".)
I silently removed using namespace std;: SO: Why is “using namespace std” considered bad practice?
I added checks on the success of the file operations. File operations can fail for a lot of reasons. For file writing, even the fclose() should be checked: written data might be cached until the file is closed, and just flushing the cached data to the file might fail (because just that might overflow the available volume space).
OP used magic numbers (image width and height), which is considered bad practice. It makes code maintenance-unfriendly and might be harder to understand for other readers: SO: What is a magic number, and why is it bad?

Uncompress data from Tiled json file with zlib C++

I have been trying to read, decode and then compress data from a json Tiled file such as the one below:
{ "height":40,
"layers":[
{
"compression":"zlib",
"data":"eJztmNkKwjAQRaN9cAPrAq5Yq3Xf6v9\/nSM2VIbQJjEZR+nDwQZScrwztoORECLySBcIgZ7nc2y4KfyWDLx+Jb9nViNgDEwY+KioAXUgQN4+zpoCMwPmQAtoAx2CLFbA2oDEo9+hwG8DnIDtF\/2K8ks086Tw2zH0uyMv7HcRr\/6\/EvvhnsPrsrxwX7rwU\/0ODig\/eV3mh3N1ld8eraWPaX6+64s9McesfrqcHfg1MpoifxcVEWjukyw+9AtFPl\/I71pER3Of6j4bv7HI54s+MChhqLlPdZ\/P3qMmFuo5h5NnTOhjM5tReN2yT51n5\/v7J3F0vi46fk+ne7aX0i9l6If7mpufTX3f5wsqv9TAD2fJLT9VrTn7UeZnM5tR+v0LMQOHXwFnxe2\/warGFRWf8QDjOLfP",
"encoding":"base64",
"height":40,
"name":"Ground",
"opacity":1,
"type":"tilelayer",
"visible":true,
"width":40,
"x":0,
"y":0
}],
"nextobjectid":1,
"orientation":"orthogonal",
"properties":
{
},
"renderorder":"right-down",
"tileheight":32,
"tilesets":[
{
"firstgid":1,
"source":"..\/..\/..\/Volumes\/Tiled 0.14.2\/examples\/desert.tsx"
}],
"tilewidth":32,
"version":1,
"width":40
}
I'm using the libraries
1. "json" (https://github.com/nlohmann/json),
2. "base64" (http://www.adp-gmbh.ch/cpp/common/base64.html) and
3. "zlib" (http://zlib.net).
This is my code:
#include <iostream>
#include <fstream>
#include <string>
#include "json.hpp"
#include "base64.hpp"
#include "zlib.h"
using json = nlohmann::json;
using namespace std;
int main(int argc, const char * argv[]) {
// Get string from json file
ifstream t("/Users/Klas/Desktop/testmap_zlib_compressed.json");
stringstream ss;
ss << t.rdbuf();
string sd = ss.str();
// Parse json string
auto j = json::parse(sd);
// Get encoded data
string encoded = j["layers"][0]["data"];
printf("Encoded: \n\n%s\n\n", encoded.c_str());
// Decode encoded data
string decoded = base64_decode(encoded);
// Convert string to char array
char b[decoded.size() + 1];
strcpy(b, decoded.c_str());
// Set size of uncompressed and compressed data
uLong h = j["layers"][0]["height"];
uLong w = j["layers"][0]["width"];
uLong ucompSize = w * h * 4; // Estimate
uLong compSize = strlen(b);
char c[ucompSize];
printf("Decoded (Compressed): \n\n%s\n\n\n", b);
// Uncompress data
uncompress((Bytef *)c, &ucompSize, (Bytef *)b, compSize);
printf("Decoded (Uncompressed): \n\n%s\n\n\n", c);
return 0;
}
When I run the program with the json file I get the output:
Encoded:
eJztmNkKwjAQRaN9cAPrAq5Yq3Xf6v9/nSM2VIbQJjEZR+nDwQZScrwztoORECLySBcIgZ7nc2y4KfyWDLx+Jb9nViNgDEwY+KioAXUgQN4+zpoCMwPmQAtoAx2CLFbA2oDEo9+hwG8DnIDtF/2K8ks086Tw2zH0uyMv7HcRr/6/EvvhnsPrsrxwX7rwU/0ODig/eV3mh3N1ld8eraWPaX6+64s9McesfrqcHfg1MpoifxcVEWjukyw+9AtFPl/I71pER3Of6j4bv7HI54s+MChhqLlPdZ/P3qMmFuo5h5NnTOhjM5tReN2yT51n5/v7J3F0vi46fk+ne7aX0i9l6If7mpufTX3f5wsqv9TAD2fJLT9VrTn7UeZnM5tR+v0LMQOHXwFnxe2/warGFRWf8QDjOLfP
Decoded (Compressed):
x\234\355\230\331
\3020E\243}p\353\256X\253u\337\352\377\235#6T\206\320&1G\351\303\301Rr\2743\266\203\221"\362H\201\236\347sl\270)\374\226\274~%\277gV#`L\370\250\250u #\336>Κ3\346#h\202,V\300ڀģߡ\300o\234\200\355\375\212\362K4\363\244\360\3331\364\273#/\354w\257\376\277\373\341\236\303벼p_\272\360S\375(?y]\346\207su\225\337\255\245\217i~\276\353\213=1Ǭ~\272\234\37052\232"h\356\223,>\364E>_\310\357ZDGs\237\352>\277\261\310\347\213>0(a\250\271Ou\237\317ޣ&\3529\207\223gL\350c3\233QxݲO\235g\347\373\373'qt\276.:~O\247{\266\227\322/e\350\207\373\232\233\237M}\337\347*\277\324\300g\311-?U\2559\373Q\346g3\233Q\372\3751\207_g\305\355\277\301\252\306\237\361
Decoded (Uncompressed):
Program ended with exit code: 0
Everything seems to work fine until it comes to the uncompressing. I'm not sure what goes wrong. Any help figuring this out is appreciated.
You can't use strlen() on binary data. If there is a zero in there, it has nothing to do with the length of the binary data. If there isn't a zero in there, you will run off the end of the data looking for a zero. Use decoded.size().
You can't use strcpy() for the same reason. Use memcpy(). Or in this case I don't see why you would copy it at all: just pass decoded.data() and decoded.size() to uncompress().
You can't necessarily print the compressed or uncompressed data as a string (%s), again for the same reason. In fact, the uncompressed data in this case consists mostly of zeros.

Creating a file on desktop (C++)

Currently I'm using Windows 8.1...
In C++, when I try to create a file on the desktop with this code...
#include "stdafx.h"
#include <fstream>
using namespace std;
int _tmain(int argc, _TCHAR* argv[])
{
ofstream myfile("C:/Users/%USERPROFILE%/Desktop/myfile.anything");
//ofstream myfile("C:/users/myfile.anything"); //Works fine with run As Administrator
return 0;
}
So the problems are: 1. the user profile part doesn't work (I don't know why), and 2. I have to run the program as administrator, even though there shouldn't be any need for that here.
I wanted to know if there is a simpler way...
Thanks
As the comments point out, you're trying to use an environment variable in your filepath, and the standard iostreams don't do environment variable expansion. You'll have to do that part yourself with platform-specific code, or simply use "normal" filepaths.
For C++ on Windows, the function to do this is GetEnvironmentVariable. It's one of those functions that takes a fixed size buffer, so using it is finicky enough that there's already a stackoverflow question all about how to call it correctly.
P.S. As the comments also pointed out, environment variable expansion only happens in places that perform it (such as shell scripts or Windows Explorer) - never in a plain fopen()/ofstream path.
The comments on the question are correct. Here's a basic way of fixing this (using http://msdn.microsoft.com/en-us/library/windows/desktop/ms683188%28v=vs.85%29.aspx)
#include <fstream>
#include <Windows.h>
#include <string>
using namespace std;
int main() {
WCHAR *buffer = new WCHAR[260];
const WCHAR name[] = L"USERPROFILE";
DWORD result = GetEnvironmentVariableW(name, buffer, 260);
if (result > 260) {
delete[] buffer; buffer = new WCHAR[result];
GetEnvironmentVariableW(name, buffer, result);
}
// %USERPROFILE% already expands to the full profile path, e.g. C:\Users\name
wstring s(buffer);
s += L"/Desktop/myfile.anything";
ofstream myfile(s.c_str()); // the wide-path constructor is an MSVC extension
// do things here
delete[] buffer;
return 0;
}
You have many ways to get the user profile directory:
via the environment variable USERPROFILE :
#include <cstdlib>
...
string profile = getenv("USERPROFILE");
via the Windows API, but it is a bit harder:
#include <windows.h>
#include <userenv.h>
...
HANDLE processHandle = ::GetCurrentProcess();
HANDLE userToken;
BOOL cr = ::OpenProcessToken(processHandle, TOKEN_QUERY, &userToken);
DWORD size = 2;
char * buff = new char[size];
cr = ::GetUserProfileDirectoryA(userToken, buff, &size); // find necessary size
delete[] buff;
buff = new char[size];
cr = ::GetUserProfileDirectoryA(userToken, buff, &size);
string profile = buff;
delete[] buff;
and you have to link with userenv.lib - the tests for return codes are left as an exercise :-)
via ExpandEnvironmentStrings:
size = ::ExpandEnvironmentStringsA("%USERPROFILE%\\Desktop\\myfile.anything",
NULL, 2);
buff = new char[size];
size = ::ExpandEnvironmentStringsA("%USERPROFILE%\\Desktop\\myfile.anything",
buff, size);
string profile = buff;
delete[] buff;
With third way you have directly your string, with first and second you only get profile directory and still have to concatenate it with relevant path.
But in fact, if you want your program to be language-independent, you should really use the SHGetSpecialFolderPath API function:
#include <shlobj.h>
...
buff = new char[MAX_PATH];
SHGetSpecialFolderPathA(HWND_DESKTOP, buff, CSIDL_DESKTOPDIRECTORY, FALSE);
string desktop = buff;
delete[] buff;
Because on my old XP box in French, the Desktop folder is actually named Bureau...

Read file to memory, loop through data, then write file [duplicate]

This question already has answers here:
How to read line by line after i read a text into a buffer?
(4 answers)
Closed 10 years ago.
I'm trying to ask a similar question to this post:
C: read binary file to memory, alter buffer, write buffer to file
but the answers didn't help me (I'm new to C++ so I couldn't understand all of them)
How do I have a loop access the data in memory, and go through line by line so that I can write it to a file in a different format?
This is what I have:
#include <fstream>
#include <iostream>
#include <string>
#include <sstream>
#include <vector>
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdlib.h>
using namespace std;
int main()
{
char* buffer;
char linearray[250];
int lineposition;
double filesize;
string linedata;
string a;
//obtain the file
FILE *inputfile;
inputfile = fopen("S050508-v3.txt", "r");
//find the filesize
fseek(inputfile, 0, SEEK_END);
filesize = ftell(inputfile);
rewind(inputfile);
//load the file into memory
buffer = (char*) malloc (sizeof(char)*filesize); //allocate mem
fread (buffer,filesize,1,inputfile); //read the file to the memory
fclose(inputfile);
//Check to see if file is correct in Memory
cout.write(buffer,filesize);
free(buffer);
}
I appreciate any help!
Edit (More info on the data):
My data is different files that vary between 5 and 10gb. There are about 300 million lines of data. Each line looks like
M359
T359 3520 359
M400
A3592 zng 392
Where the first element is a character, and the remaining items could be numbers or characters. I'm trying to read this into memory since it will be a lot faster to loop through line by line than to read a line, process it, and then write. I am compiling on 64-bit Linux. Let me know if I need to clarify further. Again, thank you.
Edit 2
I am using a switch statement to process each line, where the first character of each line determines how to format the rest of the line. For example 'M' means millisecond, and I put the next three numbers into a structure. Each line has a different first character that I need to do something different for.
So pardon the potentially blatantly obvious, but if you want to process this line by line, then...
#include <iostream>
#include <fstream>
#include <string>
using namespace std;
int main(int argc, char *argv[])
{
// read lines one at a time
ifstream inf("S050508-v3.txt");
string line;
while (getline(inf, line))
{
// ... process line ...
}
inf.close();
return 0;
}
And just fill in the body of the while loop? Maybe I'm not seeing the real problem (a forest for the trees kinda thing).
EDIT
The OP is set on using a custom streambuf, which may not necessarily be the most portable thing in the world, but he's more interested in avoiding flipping back and forth between input and output files. With enough RAM, this should do the trick.
#include <iostream>
#include <fstream>
#include <iterator>
#include <memory>
using namespace std;
struct membuf : public std::streambuf
{
membuf(size_t len)
: streambuf()
, len(len)
, src(new char[ len ] )
{
setg(src.get(), src.get(), src.get() + len);
}
// direct buffer access for file load.
char * get() { return src.get(); };
size_t size() const { return len; };
private:
std::unique_ptr<char[]> src;
size_t len;
};
int main(int argc, char *argv[])
{
// open file in binary, retrieve length-by-end-seek
ifstream inf(argv[1], ios::in|ios::binary);
inf.seekg(0,inf.end);
size_t len = inf.tellg();
inf.seekg(0, inf.beg);
// allocate a stream buffer with an internal block
// large enough to hold the entire file.
membuf mb(len);
// use our membuf buffer for our file read-op.
inf.read(mb.get(), len);
// use iss for your nefarious purposes
std::istream iss(&mb);
std::string s;
while (iss >> s)
cout << s << endl;
return EXIT_SUCCESS;
}
You should look into fgets and sscanf, with which you can pull out matched pieces of data so they are easier to manipulate, assuming that is what you want to do. Something like this could look like:
FILE *input = fopen("file.txt", "r");
FILE *output = fopen("out.txt","w");
const int bufferSize = 64;
char buffer[bufferSize];
while(fgets(buffer,bufferSize,input) != NULL){
char data[16];
sscanf(buffer,"regex",data);
//manipulate data
fprintf(output,"%s",data);
}
fclose(output);
fclose(input);
That would be more of the C way to do it, C++ handles things a little more eloquently by using an istream:
http://www.cplusplus.com/reference/istream/istream/
If I had to do this, I'd probably use code something like this:
std::ifstream in("S050508-v3.txt");
std::ostringstream buffer;
buffer << in.rdbuf();
std::string data = buffer.str();
if (check_for_good_data(data))
std::cout << data;
This assumes you really need the entire contents of the input file in memory at once to determine whether it should be copied to output or not. If (for example) you can look at the data one byte at a time, and determine whether that byte should be copied without looking at the others, you could do something more like:
std::ifstream in(...);
std::copy_if(std::istreambuf_iterator<char>(in),
std::istreambuf_iterator<char>(),
std::ostream_iterator<char>(std::cout, ""),
is_good_char);
...where is_good_char is a function that returns a bool saying whether that char should be included in the output or not.
Edit: the size of files you're dealing with mostly rules out the first possibility I've given above. You're also correct that reading and writing large chunks of data will almost certainly improve speed over working on one line at a time.