I'm trying to sign a message using the OpenSSL C API. The following code segfaults with EXC_BAD_ACCESS during either of the calls to EVP_DigestSignFinal. I'm using OpenSSL 1.0.1g. I tried switching from the newer DigestSign* functions to the older Sign* functions, and it still segfaults.
private_key is set with EVP_PKEY_set1_RSA from an RSA key loaded from a PEM file. The first call to EVP_DigestSignFinal fills s_len with the maximum possible signature length for the signing algorithm, so the signature buffer being too small shouldn't be the issue; the second call writes the signature into the buffer and sets s_len to its actual length.
I would appreciate any help I could get with this.
vector<unsigned char> rsa_sha512_sign(
        const vector<unsigned char>& document,
        shared_ptr<EVP_PKEY> private_key) {
    EVP_MD_CTX* md;
    if (!(md = EVP_MD_CTX_create())) {
        throw runtime_error("Error initializing EVP_MD_CTX.");
    }
    if (EVP_DigestSignInit(md, NULL, EVP_sha512(), NULL, private_key.get())
            != 1) {
        throw runtime_error("Error in EVP_DigestSignInit.");
    }
    if (EVP_DigestSignUpdate(md, document.data(), document.size()) != 1) {
        throw runtime_error("Error computing hash on document.");
    }
    size_t s_len;
    if (EVP_DigestSignFinal(md, NULL, &s_len) != 1) { // Segfault here
        throw runtime_error("Error determining maximum signature size.");
    }
    vector<unsigned char> signature(s_len);
    if (EVP_DigestSignFinal(md, signature.data(), &s_len) != 1) { // or here (or both)
        throw runtime_error("Error signing document.");
    }
    signature.resize(s_len);
    EVP_MD_CTX_destroy(md);
    return move(signature);
}
The problem is most likely in how you are initializing private_key. You are probably mixing malloc() with delete and corrupting the heap in the process. If the pointer you feed to shared_ptr was not created with new, you need to provide shared_ptr with the proper deleter. For example:
shared_ptr<RSA> r(RSA_new(), RSA_free);
shared_ptr<EVP_PKEY> p(EVP_PKEY_new(), EVP_PKEY_free);
shared_ptr<BIGNUM> bn(BN_new(), BN_free);
vector<unsigned char> doc(100, 0); // 100 zero bytes to sign (the original had the arguments transposed)
BN_set_word(bn.get(), RSA_F4);
RSA_generate_key_ex(r.get(), 2048, bn.get(), 0);
EVP_PKEY_set1_RSA(p.get(), r.get());
rsa_sha512_sign(doc, p);
My goal is to create a simple addon that opens a memory-mapped file and gives Node raw access to the buffer. The memory-mapped file already exists, so I don't need to create a new one; an error should be thrown if the file has not already been created (by another process).
Here is what I have so far:
#include <nan.h>
#include <windows.h>

using namespace v8;

void get_shm(const Nan::FunctionCallbackInfo<v8::Value>& info) {
    if (info.Length() != 1) {
        return Nan::ThrowTypeError("Wrong number of arguments. You must specify size of mapped file!");
    }
    if (!info[0]->IsNumber()) {
        return Nan::ThrowTypeError("Bad argument. Size of mapped file must be a number!");
    }
    int len = info[0]->Uint32Value();
    HANDLE h = OpenFileMappingA(FILE_MAP_WRITE | FILE_MAP_READ, 1, "Global\\_GB_SharedMemory_Read");
    if (h == NULL) {
        return Nan::ThrowTypeError("Couldn't open memory mapped file");
    }
    char* buf = (char*)MapViewOfFile(h, FILE_MAP_WRITE | FILE_MAP_READ, 0, 0, len);
    Nan::MaybeLocal<Object> jsbuf = Nan::NewBuffer(buf, len);
    info.GetReturnValue().Set(jsbuf.ToLocalChecked());
}

void Init(v8::Local<v8::Object> exports) {
    exports->Set(Nan::New("get_shm").ToLocalChecked(),
                 Nan::New<v8::FunctionTemplate>(get_shm)->GetFunction());
}

NODE_MODULE(addon_shm, Init)
Unfortunately node is crashing (presumably due to an access violation) at some point after the NewBuffer is returned. It doesn't crash immediately, but some unpredictable time afterwards.
I'm not attempting to read from or write to the buffer in JavaScript. The only line using the addon is this:
const buf=shm.get_shm(10);
The mapped file created by the other process is bigger than 10 bytes.
If I use malloc to create the buffer, the problem doesn't occur.
If I just call get_shm(10) without storing the result, the problem doesn't occur.
Struggling to understand the issue...
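One detail that may explain the malloc observation: Nan::NewBuffer(char*, uint32_t) assumes ownership of the pointer and frees it with free() when the Buffer is garbage-collected, which is invalid for a pointer returned by MapViewOfFile. A hedged sketch of the four-argument overload that takes an explicit free callback (the callback name here is mine):

// Unmap the view instead of free()ing it when the Buffer is collected.
static void free_mapped_view(char* data, void* hint) {
    UnmapViewOfFile(data);                        // release the mapping
    CloseHandle(reinterpret_cast<HANDLE>(hint));  // close the mapping handle
}

// Inside get_shm, the NewBuffer call would become:
Nan::MaybeLocal<Object> jsbuf = Nan::NewBuffer(buf, len, free_mapped_view, h);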
I am converting a project to a UWP App, and thus have been following guidelines outlined in the MSDN post here. The existing project heavily relies on CreateFile() to communicate with connected devices.
There are many posts on SO that show how to get a CreateFile()-accepted device path using SetupAPI's SetupDiGetDeviceInterfaceDetail(). Is there an alternative way to do this using the PnP Configuration Manager API? Or an alternative user-mode way at all?
I had some hope when I saw this example in the Windows Driver Samples GitHub repo, but quickly became dismayed when I saw that the function used in the sample is, ironically, not intended for developer use, as noted in this MSDN page.
The GetDevicePath function is correct in general and can be used as is. Regarding the difference between CM_*(..) and CM_*_Ex(.., HMACHINE hMachine): CM_*(..) simply calls CM_*_Ex(.., NULL), so for the local computer the versions with and without the _Ex suffix are the same.
Regarding the concrete GetDevicePath code: calling CM_Get_Device_Interface_List_Size and then CM_Get_Device_Interface_List only once is not 100% correct, because between those two calls a new device with this interface can arrive in the system, and the buffer size returned by CM_Get_Device_Interface_List_Size may no longer be enough for CM_Get_Device_Interface_List. The probability of this is of course very low, and you can ignore it, but I prefer to make code as theoretically correct as possible and call this in a loop until we receive an error other than CR_BUFFER_SMALL. You also need to understand that CM_Get_Device_Interface_List returns multiple NULL-terminated Unicode strings, so we need to iterate over them. The example always uses only the first returned symbolic link name of an interface instance, but there can be more than one, or none at all (empty). So a better name for the function is GetDevicePaths, note the s at the end. I would use code like this:
ULONG GetDevicePaths(LPGUID InterfaceClassGuid, PWSTR* pbuf)
{
    CONFIGRET err;
    ULONG len = 1024; // first try with some reasonable buffer size, without calling *_List_SizeW
    for (PWSTR buf;;)
    {
        if (!(buf = (PWSTR)LocalAlloc(0, len * sizeof(WCHAR))))
        {
            return ERROR_NO_SYSTEM_RESOURCES;
        }
        switch (err = CM_Get_Device_Interface_ListW(InterfaceClassGuid, 0, buf, len, CM_GET_DEVICE_INTERFACE_LIST_PRESENT))
        {
        case CR_BUFFER_SMALL:
            // re-query the required size, then fall through to free and retry
            err = CM_Get_Device_Interface_List_SizeW(&len, InterfaceClassGuid, 0, CM_GET_DEVICE_INTERFACE_LIST_PRESENT);
        default:
            LocalFree(buf);
            if (err)
            {
                return CM_MapCrToWin32Err(err, ERROR_UNIDENTIFIED_ERROR);
            }
            continue;
        case CR_SUCCESS:
            *pbuf = buf;
            return NOERROR;
        }
    }
}
And a usage example:
void example()
{
    PWSTR buf, sz;
    if (NOERROR == GetDevicePaths((GUID*)&GUID_DEVINTERFACE_VOLUME, &buf))
    {
        sz = buf;
        while (*sz)
        {
            DbgPrint("%S\n", sz);
            HANDLE hFile = CreateFile(sz, FILE_GENERIC_READ, FILE_SHARE_VALID_FLAGS, 0, OPEN_EXISTING, 0, 0);
            if (hFile != INVALID_HANDLE_VALUE)
            {
                // do something
                CloseHandle(hFile);
            }
            sz += 1 + wcslen(sz);
        }
        LocalFree(buf);
    }
}
So we must not simply use only the first string in the returned device paths (sz), but iterate over all of them:

while (*sz)
{
    // use sz
    sz += 1 + wcslen(sz);
}
I got a valid Device Path to a USB hub device and used it successfully to get various device descriptors by sending some IOCTLs, using the function I posted in my own answer to another question.
I'm reporting the same function below.
This function returns a list of NULL-terminated Device Paths (that's what we get from CM_Get_Device_Interface_List()).
You need to pass it the DEVINST and the wanted interface GUID.
Since both the DEVINST and interface GUID are specified, it is highly likely that CM_Get_Device_Interface_List() will return a single Device Path for that interface, but technically you should be prepared to get more than one result.
It is the responsibility of the caller to delete[] the returned list if the function returns successfully (return code 0).
int GetDevInstInterfaces(DEVINST dev, LPGUID interfaceGUID, wchar_t** outIfaces, ULONG* outIfacesLen)
{
    CONFIGRET cres;
    if (!outIfaces)
        return -1;
    if (!outIfacesLen)
        return -2;

    // Get System Device ID
    WCHAR sysDeviceID[256];
    cres = CM_Get_Device_ID(dev, sysDeviceID, sizeof(sysDeviceID) / sizeof(sysDeviceID[0]), 0);
    if (cres != CR_SUCCESS)
        return -11;

    // Get list size
    ULONG ifaceListSize = 0;
    cres = CM_Get_Device_Interface_List_Size(&ifaceListSize, interfaceGUID, sysDeviceID, CM_GET_DEVICE_INTERFACE_LIST_PRESENT);
    if (cres != CR_SUCCESS)
        return -12;

    // Allocate memory for the list
    wchar_t* ifaceList = new wchar_t[ifaceListSize];

    // Populate the list
    cres = CM_Get_Device_Interface_List(interfaceGUID, sysDeviceID, ifaceList, ifaceListSize, CM_GET_DEVICE_INTERFACE_LIST_PRESENT);
    if (cres != CR_SUCCESS) {
        delete[] ifaceList;
        return -13;
    }

    // Return list
    *outIfaces = ifaceList;
    *outIfacesLen = ifaceListSize;
    return 0;
}
Please note that, as RbMm already said in his answer, you may get a CR_BUFFER_SMALL error from the last CM_Get_Device_Interface_List() call, since the device list may have been changed in the time between the CM_Get_Device_Interface_List_Size() and CM_Get_Device_Interface_List() calls.
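A hypothetical usage sketch (devInst stands in for whatever DEVINST your enumeration code provides); it walks the returned multi-string the same way as the loop shown earlier:

wchar_t* ifaces = nullptr;
ULONG ifacesLen = 0;
if (GetDevInstInterfaces(devInst, (LPGUID)&GUID_DEVINTERFACE_USB_HUB, &ifaces, &ifacesLen) == 0)
{
    for (wchar_t* p = ifaces; *p; p += wcslen(p) + 1)
    {
        wprintf(L"%s\n", p); // each p is a device path usable with CreateFile()
    }
    delete[] ifaces;
}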
I'm trying to read / write multiple Protocol Buffers messages from files, in both C++ and Java. Google suggests writing length prefixes before the messages, but there's no way to do that by default (that I could see).
However, the Java API in version 2.1.0 received a set of "Delimited" I/O functions which apparently do that job:
parseDelimitedFrom
mergeDelimitedFrom
writeDelimitedTo
Are there C++ equivalents? And if not, what's the wire format for the size prefixes the Java API attaches, so I can parse those messages in C++?
Update:
These now exist in google/protobuf/util/delimited_message_util.h as of v3.3.0.
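For reference, a minimal sketch using those helpers (the stream setup here is illustrative; the function names are as declared in delimited_message_util.h):

#include <google/protobuf/io/zero_copy_stream_impl.h>
#include <google/protobuf/util/delimited_message_util.h>
#include <fstream>

// Write one size-prefixed message, then read it back.
std::ofstream out("msgs.bin", std::ios::binary);
google::protobuf::util::SerializeDelimitedToOstream(message, &out);
out.close();

std::ifstream in("msgs.bin", std::ios::binary);
google::protobuf::io::IstreamInputStream zin(&in);
bool clean_eof = false;
google::protobuf::util::ParseDelimitedFromZeroCopyStream(&message, &zin, &clean_eof);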
I'm a bit late to the party here, but the below implementations include some optimizations missing from the other answers and will not fail after 64MB of input (though it still enforces the 64MB limit on each individual message, just not on the whole stream).
(I am the author of the C++ and Java protobuf libraries, but I no longer work for Google. Sorry that this code never made it into the official lib. This is what it would look like if it had.)
bool writeDelimitedTo(
    const google::protobuf::MessageLite& message,
    google::protobuf::io::ZeroCopyOutputStream* rawOutput) {
  // We create a new coded stream for each message.  Don't worry, this is fast.
  google::protobuf::io::CodedOutputStream output(rawOutput);

  // Write the size.
  const int size = message.ByteSize();
  output.WriteVarint32(size);

  uint8_t* buffer = output.GetDirectBufferForNBytesAndAdvance(size);
  if (buffer != NULL) {
    // Optimization: The message fits in one buffer, so use the faster
    // direct-to-array serialization path.
    message.SerializeWithCachedSizesToArray(buffer);
  } else {
    // Slightly-slower path when the message is multiple buffers.
    message.SerializeWithCachedSizes(&output);
    if (output.HadError()) return false;
  }

  return true;
}

bool readDelimitedFrom(
    google::protobuf::io::ZeroCopyInputStream* rawInput,
    google::protobuf::MessageLite* message) {
  // We create a new coded stream for each message.  Don't worry, this is fast,
  // and it makes sure the 64MB total size limit is imposed per-message rather
  // than on the whole stream.  (See the CodedInputStream interface for more
  // info on this limit.)
  google::protobuf::io::CodedInputStream input(rawInput);

  // Read the size.
  uint32_t size;
  if (!input.ReadVarint32(&size)) return false;

  // Tell the stream not to read beyond that size.
  google::protobuf::io::CodedInputStream::Limit limit =
      input.PushLimit(size);

  // Parse the message.
  if (!message->MergeFromCodedStream(&input)) return false;
  if (!input.ConsumedEntireMessage()) return false;

  // Release the limit.
  input.PopLimit(limit);

  return true;
}
Okay, so I haven't been able to find top-level C++ functions implementing what I need, but some spelunking through the Java API reference turned up the following, inside the MessageLite interface:
void writeDelimitedTo(OutputStream output)
/* Like writeTo(OutputStream), but writes the size of
the message as a varint before writing the data. */
So the Java size prefix is a (Protocol Buffers) varint!
Armed with that information, I went digging through the C++ API and found the CodedStream header, which has these:
bool CodedInputStream::ReadVarint32(uint32 * value)
void CodedOutputStream::WriteVarint32(uint32 value)
Using those, I should be able to roll my own C++ functions that do the job.
They should really add this to the main Message API though; it's missing functionality considering Java has it, and so does Marc Gravell's excellent protobuf-net C# port (via SerializeWithLengthPrefix and DeserializeWithLengthPrefix).
I solved the same problem using CodedOutputStream/ArrayOutputStream to write the message (with the size) and CodedInputStream/ArrayInputStream to read the message (with the size).
For example, the following pseudo-code writes the message size followed by the message:
const unsigned bufLength = 256;
unsigned char buffer[bufLength];
Message protoMessage;
google::protobuf::io::ArrayOutputStream arrayOutput(buffer, bufLength);
google::protobuf::io::CodedOutputStream codedOutput(&arrayOutput);
codedOutput.WriteLittleEndian32(protoMessage.ByteSize());
protoMessage.SerializeToCodedStream(&codedOutput);
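And the matching read-side sketch under the same assumptions (same fixed-size buffer, little-endian size prefix):

google::protobuf::io::ArrayInputStream arrayInput(buffer, bufLength);
google::protobuf::io::CodedInputStream codedInput(&arrayInput);
google::protobuf::uint32 size;
codedInput.ReadLittleEndian32(&size);
// Constrain parsing to the announced message size.
google::protobuf::io::CodedInputStream::Limit limit = codedInput.PushLimit(size);
protoMessage.ParseFromCodedStream(&codedInput);
codedInput.PopLimit(limit);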
When writing you should also check that your buffer is large enough to fit the message (including the size). And when reading, you should check that your buffer contains a whole message (including the size).
It definitely would be handy if they added convenience methods to C++ API similar to those provided by the Java API.
IstreamInputStream is very fragile in the face of EOFs and other errors that easily occur when it is used together with std::istream. After such an error the protobuf streams are permanently damaged and any already-buffered data is destroyed. There is proper support for reading from traditional streams in protobuf:
Implement google::protobuf::io::CopyingInputStream and use it together with CopyingInputStreamAdaptor. Do the same for the output variants.
In practice a parsing call ends up in google::protobuf::io::CopyingInputStream::Read(void* buffer, int size), where a buffer is given. The only thing left to do is read into it somehow.
Here's an example for use with Asio synchronized streams (SyncReadStream/SyncWriteStream):
#include <google/protobuf/io/zero_copy_stream_impl_lite.h>
#include <boost/asio.hpp>

using namespace google::protobuf::io;

template <typename SyncReadStream>
class AsioInputStream : public CopyingInputStream {
public:
    AsioInputStream(SyncReadStream& sock);
    int Read(void* buffer, int size);
private:
    SyncReadStream& m_Socket;
};

template <typename SyncReadStream>
AsioInputStream<SyncReadStream>::AsioInputStream(SyncReadStream& sock) :
    m_Socket(sock) {}

template <typename SyncReadStream>
int
AsioInputStream<SyncReadStream>::Read(void* buffer, int size)
{
    std::size_t bytes_read;
    boost::system::error_code ec;
    bytes_read = m_Socket.read_some(boost::asio::buffer(buffer, size), ec);

    if (!ec) {
        return bytes_read;
    } else if (ec == boost::asio::error::eof) {
        return 0;
    } else {
        return -1;
    }
}

template <typename SyncWriteStream>
class AsioOutputStream : public CopyingOutputStream {
public:
    AsioOutputStream(SyncWriteStream& sock);
    bool Write(const void* buffer, int size);
private:
    SyncWriteStream& m_Socket;
};

template <typename SyncWriteStream>
AsioOutputStream<SyncWriteStream>::AsioOutputStream(SyncWriteStream& sock) :
    m_Socket(sock) {}

template <typename SyncWriteStream>
bool
AsioOutputStream<SyncWriteStream>::Write(const void* buffer, int size)
{
    boost::system::error_code ec;
    m_Socket.write_some(boost::asio::buffer(buffer, size), ec);
    return !ec;
}
Usage:

AsioInputStream<boost::asio::ip::tcp::socket> ais(m_Socket); // Where m_Socket is an instance of boost::asio::ip::tcp::socket
CopyingInputStreamAdaptor cis_adp(&ais);
CodedInputStream cis(&cis_adp);

Message protoMessage;
uint32_t msg_size;

/* Read message size */
if (!cis.ReadVarint32(&msg_size)) {
    // Handle error
}

/* Make sure not to read beyond limit of message */
CodedInputStream::Limit msg_limit = cis.PushLimit(msg_size);
if (!protoMessage.ParseFromCodedStream(&cis)) {
    // Handle error
}

/* Remove limit */
cis.PopLimit(msg_limit);
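The write direction is symmetric. A hedged sketch using the AsioOutputStream defined above (the naming mirrors the input-side usage):

AsioOutputStream<boost::asio::ip::tcp::socket> aos(m_Socket);
CopyingOutputStreamAdaptor cos_adp(&aos);
CodedOutputStream cos(&cos_adp);

/* Write message size, then the message itself */
cos.WriteVarint32(protoMessage.ByteSize());
protoMessage.SerializeToCodedStream(&cos);

/* CodedOutputStream flushes to the adaptor when destroyed, so make sure it
   goes out of scope (or call cos_adp.Flush()) before expecting the bytes
   on the socket. */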
Here you go:
#include <google/protobuf/io/zero_copy_stream_impl.h>
#include <google/protobuf/io/coded_stream.h>

using namespace google::protobuf::io;

class FASWriter
{
    std::ofstream mFs;
    OstreamOutputStream *_OstreamOutputStream;
    CodedOutputStream *_CodedOutputStream;
public:
    FASWriter(const std::string &file) : mFs(file, std::ios::out | std::ios::binary)
    {
        assert(mFs.good());
        _OstreamOutputStream = new OstreamOutputStream(&mFs);
        _CodedOutputStream = new CodedOutputStream(_OstreamOutputStream);
    }

    inline void operator()(const ::google::protobuf::Message &msg)
    {
        _CodedOutputStream->WriteVarint32(msg.ByteSize());
        if (!msg.SerializeToCodedStream(_CodedOutputStream))
            std::cout << "SerializeToCodedStream error " << std::endl;
    }

    ~FASWriter()
    {
        delete _CodedOutputStream;
        delete _OstreamOutputStream;
        mFs.close();
    }
};
class FASReader
{
    std::ifstream mFs;
    IstreamInputStream *_IstreamInputStream;
    CodedInputStream *_CodedInputStream;
public:
    FASReader(const std::string &file) : mFs(file, std::ios::in | std::ios::binary)
    {
        assert(mFs.good());
        _IstreamInputStream = new IstreamInputStream(&mFs);
        _CodedInputStream = new CodedInputStream(_IstreamInputStream);
    }

    template<class T>
    bool ReadNext()
    {
        T msg;
        uint32_t size;
        bool ret;
        if ((ret = _CodedInputStream->ReadVarint32(&size)))
        {
            CodedInputStream::Limit msgLimit = _CodedInputStream->PushLimit(size);
            if ((ret = msg.ParseFromCodedStream(_CodedInputStream)))
            {
                _CodedInputStream->PopLimit(msgLimit);
                std::cout << "FASReader ReadNext: " << msg.DebugString() << std::endl;
            }
        }
        return ret;
    }

    ~FASReader()
    {
        delete _CodedInputStream;
        delete _IstreamInputStream;
        mFs.close();
    }
};
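A hypothetical usage sketch, assuming MyMessage is a generated protobuf type:

{
    FASWriter writer("messages.fas");
    MyMessage m;
    writer(m); // appends one size-prefixed message
} // writer's destructor flushes and closes the file

FASReader reader("messages.fas");
while (reader.ReadNext<MyMessage>()) {
    // each iteration parses and prints one message
}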
I ran into the same issue in both C++ and Python.
For the C++ version, I used a mix of the code Kenton Varda posted on this thread and the code from the pull request he sent to the protobuf team (because the version posted here doesn't handle EOF while the one he sent to github does).
#include <google/protobuf/message_lite.h>
#include <google/protobuf/io/zero_copy_stream.h>
#include <google/protobuf/io/coded_stream.h>

bool writeDelimitedTo(const google::protobuf::MessageLite& message,
                      google::protobuf::io::ZeroCopyOutputStream* rawOutput)
{
    // We create a new coded stream for each message.  Don't worry, this is fast.
    google::protobuf::io::CodedOutputStream output(rawOutput);

    // Write the size.
    const int size = message.ByteSize();
    output.WriteVarint32(size);

    uint8_t* buffer = output.GetDirectBufferForNBytesAndAdvance(size);
    if (buffer != NULL)
    {
        // Optimization: The message fits in one buffer, so use the faster
        // direct-to-array serialization path.
        message.SerializeWithCachedSizesToArray(buffer);
    }
    else
    {
        // Slightly-slower path when the message is multiple buffers.
        message.SerializeWithCachedSizes(&output);
        if (output.HadError())
            return false;
    }

    return true;
}

bool readDelimitedFrom(google::protobuf::io::ZeroCopyInputStream* rawInput, google::protobuf::MessageLite* message, bool* clean_eof)
{
    // We create a new coded stream for each message.  Don't worry, this is fast,
    // and it makes sure the 64MB total size limit is imposed per-message rather
    // than on the whole stream.  (See the CodedInputStream interface for more
    // info on this limit.)
    google::protobuf::io::CodedInputStream input(rawInput);
    const int start = input.CurrentPosition();
    if (clean_eof)
        *clean_eof = false;

    // Read the size.
    uint32_t size;
    if (!input.ReadVarint32(&size))
    {
        if (clean_eof)
            *clean_eof = input.CurrentPosition() == start;
        return false;
    }

    // Tell the stream not to read beyond that size.
    google::protobuf::io::CodedInputStream::Limit limit = input.PushLimit(size);

    // Parse the message.
    if (!message->MergeFromCodedStream(&input)) return false;
    if (!input.ConsumedEntireMessage()) return false;

    // Release the limit.
    input.PopLimit(limit);

    return true;
}
And here is my Python 2 implementation:
from google.protobuf.internal import encoder
from google.protobuf.internal import decoder

#I had to implement this because the tools in google.protobuf.internal.decoder
#read from a buffer, not from a file-like object
def readRawVarint32(stream):
    mask = 0x80 # (1 << 7)
    raw_varint32 = []
    while 1:
        b = stream.read(1)
        #eof
        if b == "":
            break
        raw_varint32.append(b)
        if not (ord(b) & mask):
            #we found a byte starting with a 0, which means it's the last byte of this varint
            break
    return raw_varint32

def writeDelimitedTo(message, stream):
    message_str = message.SerializeToString()
    delimiter = encoder._VarintBytes(len(message_str))
    stream.write(delimiter + message_str)

def readDelimitedFrom(MessageType, stream):
    raw_varint32 = readRawVarint32(stream)
    message = None

    if raw_varint32:
        size, _ = decoder._DecodeVarint32(raw_varint32, 0)

        data = stream.read(size)
        if len(data) < size:
            raise Exception("Unexpected end of file")

        message = MessageType()
        message.ParseFromString(data)

    return message

#In place version that takes an already built protobuf object
#In my tests, this is around 20% faster than the other version
#of readDelimitedFrom()
def readDelimitedFrom_inplace(message, stream):
    raw_varint32 = readRawVarint32(stream)

    if raw_varint32:
        size, _ = decoder._DecodeVarint32(raw_varint32, 0)

        data = stream.read(size)
        if len(data) < size:
            raise Exception("Unexpected end of file")

        message.ParseFromString(data)

        return message
    else:
        return None
It might not be the best looking code and I'm sure it can be refactored a fair bit, but at least that should show you one way to do it.
Now the big problem: It's SLOW.
Even when using the C++ implementation of python-protobuf, it's one order of magnitude slower than in pure C++. I have a benchmark where I read 10M protobuf messages of ~30 bytes each from a file. It takes ~0.9s in C++, and 35s in python.
One way to make it a bit faster would be to re-implement the varint decoder so it reads from the file and decodes in one go, instead of reading from the file and then decoding, as this code currently does (profiling shows that a significant amount of time is spent in the varint encoder/decoder). But needless to say, that alone is not enough to close the gap between the Python version and the C++ version.
Any idea to make it faster is very welcome :)
Just for completeness, I post here an up-to-date version that works with the master version of protobuf and Python 3.
For the C++ version, it is sufficient to use the utils in delimited_message_util.h; here is an MWE:
#include <google/protobuf/io/zero_copy_stream_impl.h>
#include <google/protobuf/util/delimited_message_util.h>

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <deque>
#include <iostream>
#include <string>

template <typename T>
bool writeManyToFile(std::deque<T> messages, std::string filename) {
    // O_CREAT requires an explicit mode argument
    int outfd = open(filename.c_str(), O_WRONLY | O_CREAT | O_TRUNC, 0644);
    google::protobuf::io::FileOutputStream fout(outfd);

    bool success;
    for (auto msg : messages) {
        success = google::protobuf::util::SerializeDelimitedToZeroCopyStream(
            msg, &fout);
        if (!success) {
            std::cout << "Writing Failed" << std::endl;
            break;
        }
    }
    fout.Close();
    close(outfd);
    return success;
}

template <typename T>
std::deque<T> readManyFromFile(std::string filename) {
    int infd = open(filename.c_str(), O_RDONLY);
    google::protobuf::io::FileInputStream fin(infd);

    bool keep = true;
    bool clean_eof = true;
    std::deque<T> out;
    while (keep) {
        T msg;
        keep = google::protobuf::util::ParseDelimitedFromZeroCopyStream(
            &msg, &fin, nullptr);
        if (keep)
            out.push_back(msg);
    }
    fin.Close();
    close(infd);
    return out;
}
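A hypothetical usage sketch, assuming MyMessage is a generated protobuf type:

std::deque<MyMessage> msgs(3); // three default-constructed messages
writeManyToFile(msgs, "messages.bin");
std::deque<MyMessage> loaded = readManyFromFile<MyMessage>("messages.bin");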
For the Python 3 version, building on #fireboot's answer, the only thing that needed modification is the decoding of raw_varint32:
import six

def getSize(raw_varint32):
    result = 0
    shift = 0
    b = six.indexbytes(raw_varint32, 0)
    result |= ((ord(b) & 0x7f) << shift)
    return result

def readDelimitedFrom(MessageType, stream):
    raw_varint32 = readRawVarint32(stream)
    message = None

    if raw_varint32:
        size = getSize(raw_varint32)

        data = stream.read(size)
        if len(data) < size:
            raise Exception("Unexpected end of file")

        message = MessageType()
        message.ParseFromString(data)

    return message
I was also looking for a solution for this. Here's the core of our solution, assuming some Java code wrote many MyRecord messages with writeDelimitedTo into a file. Open the file and loop, doing:
if (someCodedInputStream->ReadVarint32(&bytes)) {
    CodedInputStream::Limit msgLimit = someCodedInputStream->PushLimit(bytes);
    if (myRecord->ParseFromCodedStream(someCodedInputStream)) {
        //do your stuff with the parsed MyRecord instance
    } else {
        //handle parse error
    }
    someCodedInputStream->PopLimit(msgLimit);
} else {
    //maybe end of file
}
Hope it helps.
Working with an Objective-C version of protocol buffers, I ran into this exact issue. On sending from the iOS client to a Java-based server that uses parseDelimitedFrom, which expects the length as the first byte, I needed to call writeRawByte to the CodedOutputStream first. Posting here to hopefully help others who run into this issue. While working through this, one would think that Google protobufs would come with a simple flag that does this for you...
Request* request = [rBuild build];
[self sendMessage:request];
}

- (void) sendMessage:(Request*) request {
    //** get length
    NSData* n = [request data];
    uint8_t len = [n length];

    PBCodedOutputStream* os = [PBCodedOutputStream streamWithOutputStream:outputStream];
    //** prepend it to message, such that Request.parseDelimitedFrom(in) can parse it properly
    //** (note: a single raw byte only encodes lengths up to 127; parseDelimitedFrom
    //** expects a varint, so larger messages would need a proper varint prefix)
    [os writeRawByte:len];
    [request writeToCodedOutputStream:os];
    [os flush];
}
Since I'm not allowed to write this as a comment to Kenton Varda's answer above: I believe there is a bug in the code he posted (as well as in other answers that have been provided). The following code:
...
google::protobuf::io::CodedInputStream input(rawInput);

// Read the size.
uint32_t size;
if (!input.ReadVarint32(&size)) return false;

// Tell the stream not to read beyond that size.
google::protobuf::io::CodedInputStream::Limit limit =
    input.PushLimit(size);
...
sets an incorrect limit because it does not take into account the size of the varint32 which has already been read from input. This can result in data loss/corruption as additional bytes are read from the stream which may be part of the next message. The usual way of handling this correctly is to delete the CodedInputStream used to read the size and create a new one for reading the payload:
...
uint32_t size;
{
    google::protobuf::io::CodedInputStream input(rawInput);

    // Read the size.
    if (!input.ReadVarint32(&size)) return false;
}

google::protobuf::io::CodedInputStream input(rawInput);

// Tell the stream not to read beyond that size.
google::protobuf::io::CodedInputStream::Limit limit =
    input.PushLimit(size);
...
You can use getline to read a string from a stream with a specified delimiter:

istream& getline ( istream& is, string& str, char delim );

(defined in the <string> header)
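For instance, a minimal sketch reading NUL-delimited records (the file name is illustrative):

#include <fstream>
#include <string>

std::ifstream in("records.bin", std::ios::binary);
std::string record;
while (std::getline(in, record, '\0')) {
    // record holds the bytes up to (not including) the next '\0'
}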
I'm trying to generate an RSA signature with libopenssl for C++.
But when I run my code, I get a SIGABRT. I did some deep debugging into libopenssl internals to see where the abort comes from. I'll come to this later on.
First I want to make clear that the RSA private key was successfully loaded from a .pem file, so I'm pretty sure that's not the problem's origin.
So my question is: How to avoid the SIGABRT and what is the cause of it ?
I'm doing this for my B.Sc. Thesis so I really appreciate your help :)
Signature Generation Function:
DocumentSignature* RSASignatureGenerator::generateSignature(ContentHash* ch, CryptographicKey* pK) throw(PDVSException) {
    OpenSSL_add_all_algorithms();
    OpenSSL_add_all_ciphers();
    OpenSSL_add_all_digests();

    if (pK == nullptr)
        throw MissingPrivateKeyException();
    if (pK->getKeyType() != CryptographicKey::KeyType::RSA_PRIVATE || !dynamic_cast<RSAPrivateKey*>(pK))
        throw KeyTypeMissmatchException(pK->getPem()->getPath().string(), "Generate RSA Signature");

    //get msg to encrypt
    const char* msg = ch->getStringHash().c_str();
    //get openssl rsa key
    RSA* rsaPK = dynamic_cast<RSAPrivateKey*>(pK)->createOpenSSLRSAKeyObject();

    //create openssl signing context
    EVP_MD_CTX* rsaSignCtx = EVP_MD_CTX_create();
    EVP_PKEY* priKey = EVP_PKEY_new();
    EVP_PKEY_assign_RSA(priKey, rsaPK);

    //init ctxt
    if (EVP_SignInit(rsaSignCtx, EVP_sha256()) <= 0)
        throw RSASignatureGenerationException();
    //add data to sign
    if (EVP_SignUpdate(rsaSignCtx, msg, std::strlen(msg)) <= 0) {
        throw RSASignatureGenerationException();
    }

    //create result byte signature struct
    DocumentSignature::ByteSignature* byteSig = new DocumentSignature::ByteSignature();
    //set size to max possible
    byteSig->size = EVP_MAX_MD_SIZE;
    //alloc buffer memory
    byteSig->data = (unsigned char*)malloc(byteSig->size);
    //do signing
    if (EVP_SignFinal(rsaSignCtx, byteSig->data, (unsigned int*)&byteSig->size, priKey) <= 0)
        throw RSASignatureGenerationException();

    DocumentSignature* res = new DocumentSignature(ch);
    res->setByteSignature(byteSig);

    EVP_MD_CTX_destroy(rsaSignCtx);
    //TODO open SSL Memory leaks -> where to free open ssl stuff?!
    return res;
}
RSA* rsaPK = dynamic_cast<RSAPrivateKey*>(pK)->createOpenSSLRSAKeyObject();
virtual RSA* createOpenSSLRSAKeyObject() throw(PDVSException) override {
    RSA* rsa = NULL;
    const char* c_string = _pem->getContent().c_str();

    BIO* keybio = BIO_new_mem_buf((void*)c_string, -1);
    if (keybio == NULL)
        throw OpenSSLRSAPrivateKeyObjectCreationException(_pem->getPath());

    rsa = PEM_read_bio_RSAPrivateKey(keybio, &rsa, NULL, NULL);
    if (rsa == nullptr)
        throw OpenSSLRSAPrivateKeyObjectCreationException(_pem->getPath());

    //BIO_free(keybio);
    return rsa;
}
The SIGABRT originates in file openssl/crypto/mem.c:
void CRYPTO_free(void *str, const char *file, int line)
{
    if (free_impl != NULL && free_impl != &CRYPTO_free) {
        free_impl(str, file, line);
        return;
    }
#ifndef OPENSSL_NO_CRYPTO_MDEBUG
    if (call_malloc_debug) {
        CRYPTO_mem_debug_free(str, 0, file, line);
        free(str);
        CRYPTO_mem_debug_free(str, 1, file, line);
    } else {
        free(str);
    }
#else
    free(str); // <<<<<<< HERE
#endif
}
The stack trace: [stack trace screenshot from the debugger (CLion, GDB-based)]
I just found the bug (and I'm really not sure whether this is a libopenssl bug):

//set size to max possible
byteSig->size = EVP_MAX_MD_SIZE;
//alloc buffer memory
byteSig->data = (unsigned char*)malloc(byteSig->size);

The problem appeared when I set the buffer size to EVP_MAX_MD_SIZE!
The (in my opinion) very strange thing is that you have to keep the size uninitialized (not even set to 0, just "size_t size;").
Strange thing here is that then you also HAVE TO allocate memory just like I did. I don't understand this, because then an undefined amount of memory gets allocated.
What's really weird is that libopenssl internally sets the size back to 0 and allocates the memory itself (I discovered this by browsing the libopenssl source code).
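For what it's worth, the EVP_SignFinal documentation requires the output buffer to hold at least EVP_PKEY_size(pkey) bytes, which for a 2048-bit RSA key is 256 bytes, larger than EVP_MAX_MD_SIZE (64); a buffer sized to EVP_MAX_MD_SIZE is therefore overrun. A hedged sketch of the documented sizing, reusing the names from the question:

//size the buffer for the signature, not the digest:
//EVP_SignFinal writes up to EVP_PKEY_size(priKey) bytes
byteSig->size = EVP_PKEY_size(priKey);
byteSig->data = (unsigned char*)malloc(byteSig->size);

unsigned int sigLen = 0;
if (EVP_SignFinal(rsaSignCtx, byteSig->data, &sigLen, priKey) <= 0)
    throw RSASignatureGenerationException();
byteSig->size = sigLen; //actual signature length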
I need to do simple single-block AES encryption / decryption in my Qt / C++ application. This is a "keep the honest people honest" implementation, so just a basic encrypt(key, data) is necessary--I'm not worried about initialization vectors, etc. My input and key will always be exactly 16 bytes.
I'd really like to avoid another dependency to compile / link / ship with my application, so I'm trying to use what's available on each platform. On the Mac, this was a one-liner to CCCrypt. On Windows, I'm getting lost in the API from WinCrypt.h. Their example of encrypting a file is almost 600 lines long. Seriously?
I'm looking at CryptEncrypt, but I'm falling down the rabbit hole of dependencies you have to create before you can call that.
Can anyone provide a simple example of doing AES encryption using the Windows API? Surely there's a way to do this in a line or two. Assume you already have a 128-bit key and 128-bits of data to encrypt.
Here's the best I've been able to come up with. Suggestions for improvement are welcome!
static const size_t kAesBytes128 = 16;

// AesBlob128 was referenced but not defined in the original post; this is the
// standard PLAINTEXTKEYBLOB layout (header, key length, key bytes).
struct AesBlob128 {
    BLOBHEADER header;
    DWORD key_length;
    BYTE key_bytes[kAesBytes128];
};

static void encrypt(const QByteArray &data,
                    const QByteArray &key,
                    QByteArray *encrypted) {
    // Create the crypto provider context.
    HCRYPTPROV hProvider = NULL;
    if (!CryptAcquireContext(&hProvider,
                             NULL,  // pszContainer = no named container
                             NULL,  // pszProvider = default provider
                             PROV_RSA_AES,
                             CRYPT_VERIFYCONTEXT)) {
        throw std::runtime_error("Unable to create crypto provider context.");
    }

    // Construct the blob necessary for the key generation.
    AesBlob128 aes_blob;
    aes_blob.header.bType = PLAINTEXTKEYBLOB;
    aes_blob.header.bVersion = CUR_BLOB_VERSION;
    aes_blob.header.reserved = 0;
    aes_blob.header.aiKeyAlg = CALG_AES_128;
    aes_blob.key_length = kAesBytes128;
    memcpy(aes_blob.key_bytes, key.constData(), kAesBytes128);

    // Create the crypto key struct that Windows needs.
    HCRYPTKEY hKey = NULL;
    if (!CryptImportKey(hProvider,
                        reinterpret_cast<BYTE*>(&aes_blob),
                        sizeof(AesBlob128),
                        NULL,  // hPubKey = not encrypted
                        0,     // dwFlags
                        &hKey)) {
        throw std::runtime_error("Unable to create crypto key.");
    }

    // The CryptEncrypt method uses the *same* buffer for both the input and
    // output (!), so we copy the data to be encrypted into the output array.
    // Also, for some reason, the AES-128 block cipher on Windows requires twice
    // the block size in the output buffer. So we resize it to that length and
    // then chop off the excess after we are done.
    encrypted->clear();
    encrypted->append(data);
    encrypted->resize(kAesBytes128 * 2);

    // This acts as both the length of bytes to be encoded (on input) and the
    // number of bytes used in the resulting encrypted data (on output).
    DWORD length = kAesBytes128;
    if (!CryptEncrypt(hKey,
                      NULL,  // hHash = no hash
                      true,  // Final
                      0,     // dwFlags
                      reinterpret_cast<BYTE*>(encrypted->data()),
                      &length,
                      encrypted->length())) {
        throw std::runtime_error("Encryption failed");
    }

    // See comment above.
    encrypted->chop(length - kAesBytes128);

    CryptDestroyKey(hKey);
    CryptReleaseContext(hProvider, 0);
}
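For completeness, a hedged sketch of the matching CryptDecrypt call, assuming the same provider and key-import steps as above. Note that with Final = true, CryptDecrypt verifies and strips the PKCS padding, so it needs the full two-block ciphertext (that is, skip the chop above if you intend to decrypt this way):

// decrypted initially holds the full 2 * kAesBytes128 bytes of ciphertext;
// CryptDecrypt, like CryptEncrypt, works in place on the same buffer.
DWORD length = static_cast<DWORD>(decrypted->length());
if (!CryptDecrypt(hKey,
                  NULL,   // hHash = no hash
                  true,   // Final
                  0,      // dwFlags
                  reinterpret_cast<BYTE*>(decrypted->data()),
                  &length)) {
    throw std::runtime_error("Decryption failed");
}
decrypted->resize(length); // length now holds the plaintext size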