I use the Crypto++ library. I have a Base64 string saved as a CString and I want to convert it to an Integer. The Base64 string was originally built from an Integer, but when I convert it back, the resulting Integer does not match the original one.
Base64Decoder bd;
CT2CA s(c);
std::string strStd(s);
bd.Put((byte*)strStd.data(), strStd.size());
bd.MessageEnd();
word64 size = bd.MaxRetrievable();
vector<byte> cypherVector(size);
string decoded;
if (size && size <= SIZE_MAX)
{
decoded.resize(size);
bd.Get((byte*)decoded.data(), decoded.size());
}
Integer cipherMessage((byte*)decoded.data(), decoded.size());
string decoded;
if (size && size <= SIZE_MAX)
{
decoded.resize(size);
bd.Get((byte*)decoded.data(), decoded.size());
}
You have a string called decoded, but you never actually decode the data by running it through a Base64Decoder.
Use something like the following. I don't have an MFC project handy to test with, so I'm going to assume you converted the CString to a std::string.
// Converted from Unicode CString
std::string str;
StringSource source(str, true, new Base64Decoder);
Integer value(source, source.MaxRetrievable());
std::cout << std::hex << value << std::endl;
The StringSource is a BufferedTransformation. The Integer constructor you are using is:
Integer (BufferedTransformation &bt, size_t byteCount, Signedness sign=UNSIGNED, ByteOrder order=BIG_ENDIAN_ORDER)
In between the StringSource and the Integer is the Base64Decoder. It's a filter that decodes the string on the fly. So data flows from the source (StringSource) to the sink (the Integer constructor).
Also see Pipelines on the Crypto++ wiki.
Here is my solution to achieve this. It uses some Qt classes but it should be simple to replace them:
#include <QByteArray>
#include <QScopedArrayPointer>
#include <crypto++/base64.h>
#include <crypto++/rsa.h>
using namespace CryptoPP;
Integer convertBase64ToCryptoPpInt(const QByteArray &base64)
{
Base64Decoder decoder;
decoder.Put(reinterpret_cast<const byte*>(base64.data()), base64.size());
decoder.MessageEnd();
const word64 size = decoder.MaxRetrievable();
QScopedArrayPointer<byte> decoded{new byte[size]};
decoder.Get(decoded.data(), size);
return {decoded.data(), size};
}
QByteArray convertCryptoPpIntToBase64(const Integer &i)
{
// Copy content of i into byte array
const unsigned iLen = i.ByteCount();
QScopedArrayPointer<byte> idata{new byte[iLen]};
i.Encode(idata.data(), iLen);
// Encode data
Base64Encoder encoder;
encoder.Put(idata.data(), iLen);
encoder.MessageEnd();
const int encodedSize = encoder.MaxRetrievable();
QScopedArrayPointer<byte> encoded{new byte[encodedSize]};
encoder.Get(encoded.data(), encodedSize);
return {reinterpret_cast<char*>(encoded.data()), encodedSize};
}
It may be much more compact using Crypto++'s pipelining, but I didn't find out how to stream from and to a CryptoPP::Integer.
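For what it's worth, a more compact round trip does seem possible by letting Integer::Decode and Integer::Encode talk to a BufferedTransformation directly. This is a sketch, not from the original answer, and the helper names are mine:
#include <string>
#include <crypto++/base64.h>
#include <crypto++/integer.h>
#include <crypto++/filters.h>
using namespace CryptoPP;
// Base64 -> Integer: decode into the StringSource chain, then let
// Integer::Decode consume the decoded bytes (big-endian).
Integer base64ToInteger(const std::string &base64)
{
    StringSource source(base64, true /*pumpAll*/, new Base64Decoder);
    Integer value;
    value.Decode(source, source.MaxRetrievable());
    return value;
}
// Integer -> Base64: encode the integer's bytes straight into a Base64Encoder
// that sinks into a std::string.
std::string integerToBase64(const Integer &i)
{
    std::string encoded;
    Base64Encoder encoder(new StringSink(encoded), false /*no line breaks*/);
    i.Encode(encoder, i.MinEncodedSize());
    encoder.MessageEnd();
    return encoded;
}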
I'm working with machines that are 10+ years old and use ISO 8859-7 to represent Greek characters with a single byte each.
I need to catch those characters and convert them to UTF-8 in order to inject them into a JSON document to be sent via HTTPS.
Also, I'm using GCC v4.4.7 and I don't feel like upgrading, so I can't use std::codecvt or the like.
Example: for "ΟΛΑ" I get the char values [0xcf, 0xcb, 0xc1] and I need to write the string "\u039F\u039B\u0391".
PS: I'm not a charset expert so please avoid philosophical answers like "ISO 8859 is a subset of Unicode so you just need to implement the algorithm".
Given that there are so few values to map, a simple solution is to use a lookup table.
Pseudocode:
id_offset = 0x80 // 0x00 .. 0x7F same in UTF-8
c1_offset = 0x20 // 0x80 .. 0x9F control characters
table_offset = id_offset + c1_offset
table = [
u8"\u00A0", // 0xA0
u8"‘", // 0xA1
u8"’",
u8"£",
u8"€",
u8"₯",
// ... Refer to ISO 8859-7 for full list of characters.
]
let S be the input string
let O be an empty output string
for each char C in S
reinterpret C as unsigned char U
if U less than id_offset // same in both encodings
append C to O
else if U less than table_offset // control code
append char '\xC2' to O // lead byte
append char C to O
else
append string table[U - table_offset] to O
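A rough C++ rendering of this pseudocode (my own sketch, not part of the original answer; the table is deliberately truncated and must be completed from the ISO 8859-7 code chart):
#include <string>
// Convert ISO 8859-7 text to UTF-8 via a lookup table for the 0xA0..0xFF range.
// Only the first few table entries are filled in here; missing entries fall back to "?".
std::string iso8859_7_to_utf8(const std::string& in)
{
    static const char* const table[] = {
        "\xC2\xA0",      // 0xA0 NO-BREAK SPACE (U+00A0)
        "\xE2\x80\x98",  // 0xA1 LEFT SINGLE QUOTATION MARK (U+2018)
        "\xE2\x80\x99",  // 0xA2 RIGHT SINGLE QUOTATION MARK (U+2019)
        "\xC2\xA3",      // 0xA3 POUND SIGN (U+00A3)
        "\xE2\x82\xAC",  // 0xA4 EURO SIGN (U+20AC)
        "\xE2\x82\xAF",  // 0xA5 DRACHMA SIGN (U+20AF)
        // ... one UTF-8 literal per code point, per the ISO 8859-7 chart
    };
    const std::size_t filled = sizeof(table) / sizeof(table[0]);
    std::string out;
    for (std::size_t i = 0; i < in.size(); ++i) {
        unsigned char u = static_cast<unsigned char>(in[i]);
        if (u < 0x80) {                 // ASCII range: identical in UTF-8
            out += in[i];
        } else if (u < 0xA0) {          // C1 control codes: 0xC2 lead byte + same trail byte
            out += '\xC2';
            out += in[i];
        } else if (u - 0xA0 < filled) { // table lookup
            out += table[u - 0xA0];
        } else {
            out += '?';                 // placeholder until the table is completed
        }
    }
    return out;
}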
All that said, I recommend saving some time by using a library instead.
One way could be to use the POSIX libiconv library. On Linux, the needed functions (iconv_open, iconv and iconv_close) are even included in libc, so no extra linkage is needed there. On your old machines you may need to install libiconv, but I doubt it.
Converting may be as simple as this:
#include <iconv.h>
#include <cerrno>
#include <cstring>
#include <iostream>
#include <iterator>
#include <stdexcept>
#include <string>
// A wrapper for the iconv functions
class Conv {
public:
// Open a conversion descriptor for the two selected character sets
Conv(const char* to, const char* from) : cd(iconv_open(to, from)) {
if(cd == reinterpret_cast<iconv_t>(-1))
throw std::runtime_error(std::strerror(errno));
}
Conv(const Conv&) = delete;
~Conv() { iconv_close(cd); }
// the actual conversion function
std::string convert(const std::string& in) {
const char* inbuf = in.c_str();
size_t inbytesleft = in.size();
// make the "out" buffer big to fit whatever we throw at it and set pointers
std::string out(inbytesleft * 6, '\0');
char* outbuf = out.data();
size_t outbytesleft = out.size();
// the const_cast shouldn't be needed but my "iconv" function declares it
// "char**" not "const char**"
size_t non_rev_converted = iconv(cd, const_cast<char**>(&inbuf),
&inbytesleft, &outbuf, &outbytesleft);
if(non_rev_converted == static_cast<size_t>(-1)) {
// here you can add misc handling like replacing erroneous chars
// and continue converting etc.
// I'll just throw...
throw std::runtime_error(std::strerror(errno));
}
// shrink to keep only what we converted
out.resize(outbuf - out.data());
return out;
}
private:
iconv_t cd;
};
int main() {
Conv cvt("UTF-8", "ISO-8859-7");
// create a string from the ISO-8859-7 data
unsigned char data[]{0xcf, 0xcb, 0xc1};
std::string iso88597_str(std::begin(data), std::end(data));
auto utf8 = cvt.convert(iso88597_str);
std::cout << utf8 << '\n';
}
Output (in UTF-8):
ΟΛΑ
Using this you can create a mapping table, from ISO-8859-7 to UTF-8, that you include in your project instead of iconv:
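For example, a small hypothetical generator built on the Conv class above could dump the 0xA0..0xFF range as a C array of UTF-8 literals (my sketch; unassigned ISO 8859-7 positions simply get a placeholder):
#include <cstdio>
#include <stdexcept>
#include <string>
int main() {
    Conv cvt("UTF-8", "ISO-8859-7");
    std::printf("const char* iso8859_7_to_utf8_table[96] = {\n");
    for (unsigned c = 0xA0; c <= 0xFF; ++c) {
        try {
            std::string utf8 = cvt.convert(std::string(1, static_cast<char>(c)));
            std::printf("    \"");
            for (std::size_t i = 0; i < utf8.size(); ++i)
                std::printf("\\x%02X", static_cast<unsigned char>(utf8[i]));
            std::printf("\", // 0x%02X\n", c);
        } catch (const std::runtime_error&) {
            // some ISO 8859-7 positions (e.g. 0xD2) are unassigned and make convert() throw
            std::printf("    \"?\", // 0x%02X (unassigned)\n", c);
        }
    }
    std::printf("};\n");
}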
OK, I decided to do this myself instead of looking for a compatible library. Here's how I did it.
The main problem was figuring out how to fill the two UTF-8 bytes from the single ISO byte, so I used the debugger to read the value of the same character, first as written by the old machine and then as written in a constant string (UTF-8 by default). I started with "O" and "Π" and saw that in UTF-8 the first byte was always 0xCE, while the second one was the ISO value plus an offset (-0x30). I wrote the code below to implement this and used a test string containing all Greek letters, both upper and lower case. Then I realised that starting from "π" (0xF0 in ISO) both the first byte and the offset for the second one change, so I added a test to decide which of the two rules to apply.
The following method returns a bool to let the caller know whether the original string contained ISO characters (useful for other purposes) and overwrites the original string, passed in as a char pointer, with the new one. I worked with char arrays instead of strings for coherence with the rest of the project, which is basically a C project written in C++.
bool iso_to_utf8(char* in){
bool wasISO=false;
if(in == NULL)
return wasISO;
// count chars
int i=strlen(in);
if(!i)
return wasISO;
// create and size new buffer (worst case: every char doubles, plus the terminator)
char *out = new char[2*i + 1];
// fill with 0's, useful for watching the string as it gets built
memset(out, 0, 2*i + 1);
// ready to start from head of old buffer
i=0;
// index for new buffer
int j=0;
// for each char in old buffer
while(in[i]!='\0'){
if(in[i] >= 0){
// it's already utf8-compliant, take it as it is
out[j++] = in[i];
}else{
// it's ISO
wasISO=true;
// get plain value
int val = in[i] & 0xFF;
// first byte to CF or CE
out[j++]= val > 0xEF ? 0xCF : 0xCE;
// second char to plain value normalized
out[j++] = val - (val > 0xEF ? 0x70 : 0x30);
}
i++;
}
// add string terminator
out[j]='\0';
// paste into old char array: the caller must guarantee it is large enough
// to hold the expanded string (up to twice the original length plus one)
strcpy(in, out);
delete[] out;
return wasISO;
}
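A hypothetical usage sketch (not from the original post); note that the caller's buffer has to be large enough for the expanded UTF-8 string, since iso_to_utf8 copies the result back in place:
#include <cstdio>
int main() {
    // "ΟΛΑ" in ISO 8859-7, in a buffer big enough for the UTF-8 expansion
    char buf[16] = { (char)0xcf, (char)0xcb, (char)0xc1, 0 };
    bool hadISO = iso_to_utf8(buf);
    std::printf("%s (wasISO=%d)\n", buf, hadISO ? 1 : 0);
    return 0;
}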
I need to do Blowfish encryption with the OpenSSL library, but something does not work.
What am I doing wrong? I'm trying to do it this way:
#include <iostream>
#include <openssl/blowfish.h>
#include "OpenSSL_Base64.h"
#include "Base64.h"
using namespace std;
int main()
{
unsigned char ciphertext[BF_BLOCK];
unsigned char plaintext[BF_BLOCK];
// blowfish key
const unsigned char *key = (const unsigned char*)"topsecret";
//unsigned char key_data[10] = "topsecret";
BF_KEY bfKey;
BF_set_key(&bfKey, 10, key);
/* Open SSL's Blowfish ECB encrypt/decrypt function only handles 8 bytes of data */
char a_str[] = "8 Bytes";//{8, ,B,y,t,e,s,\0}
char *arr_ptr = &a_str[0];
//unsigned char* data_to_encrypt = (unsigned char*)"8 Bytes"; // 7 + \0
BF_ecb_encrypt((unsigned char*)arr_ptr, ciphertext, &bfKey, BF_ENCRYPT);
unsigned char* ret = new unsigned char[BF_BLOCK + 1];
strcpy((char*)ret, (char*)ciphertext);
ret[BF_BLOCK + 1] = '\0';
char* base_enc = OpenSSL_Base64::Base64Encode((char*)ret, strlen((char*)ret));
cout << base_enc << endl;
cin.get();
return 0;
}
But I get the wrong output:
fy7maf+FhmbM
I checked with it:
http://sladex.org/blowfish.js/
It should be: fEcC5/EKDVY=
Base64:
http://pastebin.com/wNLZQxQT
The problem is that ret may contain a null byte. Encryption is 8-bit-byte based, not character based, and the output will contain values from the full range 0-255. strlen will terminate on the first null byte it finds, giving a length that is smaller than the full length of the encrypted data.
Note: When using encryption, pay strict attention to providing the exact correct length parameters and data; do not rely on padding. (The exception is input data to encryption functions that support data padding, such as PKCS#7 (née PKCS#5) padding.)
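A hedged sketch of that advice applied to the question's code: treat the ciphertext as exactly BF_BLOCK raw bytes and never run it through strlen or strcpy. OpenSSL_Base64::Base64Encode is the asker's own helper, so its (data, length) signature is assumed here.
#include <cstring>
#include <openssl/blowfish.h>
int main()
{
    const unsigned char* key = (const unsigned char*)"topsecret";
    BF_KEY bfKey;
    // Key length kept as in the question (9 characters plus the trailing '\0');
    // whether the reference JavaScript tool keys on 9 or 10 bytes is a separate issue.
    BF_set_key(&bfKey, 10, key);
    unsigned char plaintext[BF_BLOCK] = {0};  // BF_ecb_encrypt works on exactly 8 bytes
    std::memcpy(plaintext, "8 Bytes", 8);     // 7 characters plus the terminating '\0'
    unsigned char ciphertext[BF_BLOCK];
    BF_ecb_encrypt(plaintext, ciphertext, &bfKey, BF_ENCRYPT);
    // Hand the Base64 step the exact block length; strlen would stop at any 0x00 byte.
    // char* base_enc = OpenSSL_Base64::Base64Encode((char*)ciphertext, BF_BLOCK);
    return 0;
}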
I am studying C++ independently and I have a problem which I haven't been able to solve for more than a week. I hope you can help me.
I need to get a SHA1 digest of a Unicode string (like Привет), but I don't know how to do that.
I tried to do it like this, but it returns the wrong digest!
For wstring('Ы')
It returns - A469A61DF29A7568A6CC63318EA8741FA1CF2A7
I need - 8dbe718ab1e0c4d75f7ab50fc9a53ec4f0528373
Regards and sorry for my English :).
CryptoPP 5.6.2
MSVC++ 2013
#include <iostream>
#include "cryptopp562\cryptlib.h"
#include "cryptopp562\sha.h"
#include "cryptopp562\hex.h"
int main() {
std::wstring string(L"Ы");
int bs_size = (int)string.length() * sizeof(wchar_t);
byte* bytes_string = new byte[bs_size];
int n = 0; //real bytes count
for (int i = 0; i < string.length(); i++) {
wchar_t wcharacter = string[i];
int high_byte = wcharacter & 0xFF00;
high_byte = high_byte >> 8;
int low_byte = wcharacter & 0xFF;
if (high_byte != 0) {
bytes_string[n++] = (byte)high_byte;
}
bytes_string[n++] = (byte)low_byte;
}
CryptoPP::SHA1 sha1;
std::string hash;
CryptoPP::StringSource ss(bytes_string, n, true,
new CryptoPP::HashFilter(sha1,
new CryptoPP::HexEncoder(
new CryptoPP::StringSink(hash)
)
)
);
std::cout << hash << std::endl;
return 0;
}
I need to get a SHA1 digest of a Unicode string (like Привет), but I don't know how to do that.
The trick here is that you need to know how to encode the Unicode string. On Windows, a wchar_t is 2 octets, while on Linux a wchar_t is 4 octets. There's a Crypto++ wiki page on it at Character Set Considerations, but it's not that good.
To interoperate most effectively, always use UTF-8. That means you convert UTF-16 or UTF-32 to UTF-8. Because you are on Windows, you will want to call the WideCharToMultiByte function to convert using CP_UTF8. If you were on Linux, then you would use libiconv.
Crypto++ has a built-in function called StringNarrow that uses C++. It's in the file misc.h. Be sure to call setlocale before using it.
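For instance, a minimal sketch of that route, assuming the StringNarrow(const wchar_t*, bool throwOnError = true) signature declared in misc.h (adjust the include path to your Crypto++ setup):
#include <clocale>
#include <string>
#include "misc.h"   // CryptoPP::StringNarrow
std::string narrow(const std::wstring& ws)
{
    std::setlocale(LC_ALL, "");                  // pick up the environment's locale first
    return CryptoPP::StringNarrow(ws.c_str());   // throws on conversion failure by default
}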
Stack Overflow has a few questions on using the Windows function. See, for example, How do you properly use WideCharToMultiByte.
I need - 8dbe718ab1e0c4d75f7ab50fc9a53ec4f0528373
What is the hash (SHA-1, SHA-256, ...)? Is it a HMAC (keyed hash)? Is the information salted (like a password in storage)? How is it encoded? I have to ask because I cannot reproduce your desired results:
SHA-1: 2805AE8E7E12F182135F92FB90843BB1080D3BE8
SHA-224: 891CFB544EB6F3C212190705F7229D91DB6CECD4718EA65E0FA1B112
SHA-256: DD679C0B9FD408A04148AA7D30C9DF393F67B7227F65693FFFE0ED6D0F0ADE59
SHA-384: 0D83489095F455E4EF5186F2B071AB28E0D06132ABC9050B683DA28A463697AD
1195FF77F050F20AFBD3D5101DF18C0D
SHA-512: 0F9F88EE4FA40D2135F98B839F601F227B4710F00C8BC48FDE78FF3333BD17E4
1D80AF9FE6FD68515A5F5F91E83E87DE3C33F899661066B638DB505C9CC0153D
Here's the program I used. Be sure to specify the length of the wide string. If you don't (and use -1 for the length), then WideCharToMultiByte will include the terminating ASCII-Z in its calculations. Since we are using a std::string, we don't need the function to include the ASCII-Z terminator.
// Headers added for completeness; adjust the Crypto++ include paths to your setup.
#include <windows.h>
#include <iostream>
#include <stdexcept>
#include <string>
#include "cryptlib.h"
#include "sha.h"
#include "hex.h"
#include "filters.h"
#include "channels.h"
using namespace CryptoPP;
using namespace std;
int main(int argc, char* argv[])
{
wstring m1 = L"Привет"; string m2;
int req = WideCharToMultiByte(CP_UTF8, 0, m1.c_str(), (int)m1.length(), NULL, 0, NULL, NULL);
if(req < 0 || req == 0)
throw runtime_error("Failed to convert string");
m2.resize((size_t)req);
int cch = WideCharToMultiByte(CP_UTF8, 0, m1.c_str(), (int)m1.length(), &m2[0], (int)m2.length(), NULL, NULL);
if(cch < 0 || cch == 0)
throw runtime_error("Failed to convert string");
// Should not be required
m2.resize((size_t)cch);
string s1, s2, s3, s4, s5;
SHA1 sha1; SHA224 sha224; SHA256 sha256; SHA384 sha384; SHA512 sha512;
HashFilter f1(sha1, new HexEncoder(new StringSink(s1)));
HashFilter f2(sha224, new HexEncoder(new StringSink(s2)));
HashFilter f3(sha256, new HexEncoder(new StringSink(s3)));
HashFilter f4(sha384, new HexEncoder(new StringSink(s4)));
HashFilter f5(sha512, new HexEncoder(new StringSink(s5)));
ChannelSwitch cs;
cs.AddDefaultRoute(f1);
cs.AddDefaultRoute(f2);
cs.AddDefaultRoute(f3);
cs.AddDefaultRoute(f4);
cs.AddDefaultRoute(f5);
StringSource ss(m2, true /*pumpAll*/, new Redirector(cs));
cout << "SHA-1: " << s1 << endl;
cout << "SHA-224: " << s2 << endl;
cout << "SHA-256: " << s3 << endl;
cout << "SHA-384: " << s4 << endl;
cout << "SHA-512: " << s5 << endl;
return 0;
}
You say ‘but it returns wrong digest’ – what are you comparing it with?
Key point: digests such as SHA-1 don't work with sequences of characters, but with sequences of bytes.
What you're doing in this snippet of code is generating an ad-hoc encoding of the unicode characters in the string "Ы". This encoding will (as it turns out) match the UTF-16 encoding if the characters in the string are all in the BMP (‘basic multilingual plane’, which is true in this case) and if the numbers that end up in wcharacter are integers representing unicode codepoints (which is sort-of probably correct, but not, I think, guaranteed).
If the digest you're comparing it with turns an input string into a sequence of bytes using the UTF-8 encoding (which is quite likely), then that will produce a different byte sequence from yours, so the SHA-1 digest of that sequence will be different from the digest you calculate here.
So:
Check what encoding your test string is using.
You'd be best off using some library functions to specifically generate a UTF-16 or UTF-8 (as appropriate) encoding of the string you want to process, to ensure that the byte sequence you're working with is what you think it is.
There's an excellent introduction to unicode and encodings in the aptly-named document The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
This seems to work fine for me.
Rather than fiddling about trying to extract the pieces I simply cast the wide character buffer to a const byte* and pass that (and the adjusted size) to the hash function.
int main() {
std::wstring string(L"Привет");
CryptoPP::SHA1 sha1;
std::string hash;
CryptoPP::StringSource ss(
reinterpret_cast<const byte*>(string.c_str()), // cast to const byte*
string.size() * sizeof(std::wstring::value_type), // adjust for size
true,
new CryptoPP::HashFilter(sha1,
new CryptoPP::HexEncoder(
new CryptoPP::StringSink(hash)
)
)
);
std::cout << hash << std::endl;
return 0;
}
Output:
C6F8291E68E478DD5BD1BC2EC2A7B7FC0CEE1420
EDIT: To add: the result is going to be encoding dependent. For example, I ran this on Linux, where wchar_t is 4 bytes. On Windows I believe wchar_t may be only 2 bytes.
For consistency it may be better to use UTF-8 and store the text in a normal std::string. This also makes calling the API simpler:
int main() {
std::string string("Привет"); // UTF-8 encoded
CryptoPP::SHA1 sha1;
std::string hash;
CryptoPP::StringSource ss(
string,
true,
new CryptoPP::HashFilter(sha1,
new CryptoPP::HexEncoder(
new CryptoPP::StringSink(hash)
)
)
);
std::cout << hash << std::endl;
return 0;
}
Output:
2805AE8E7E12F182135F92FB90843BB1080D3BE8
I am new to cryptopp and have been struggling for a while with the creation of private keys for ECDSA signing.
I have a hex encoded private exponent E4A6CFB431471CFCAE491FD566D19C87082CF9FA7722D7FA24B2B3F5669DBEFB. This is stored as a string.
I want to use this to sign a text block using ECDSA. My code looks a bit like this
string Sig::genSignature(const string& privKeyIn, const string& messageIn)
{
AutoSeededRandomPool prng;
ECDSA<ECP, SHA256>::PrivateKey privateKey;
privateKey.AccessGroupParameters().Initialize(ASN1::secp256r1());
privateKey.Load(StringSource(privKeyIn, true, NULL).Ref());
ECDSA<ECP, SHA256>::Signer signer(privateKey);
// Determine maximum size, allocate a string with that size
size_t siglen = signer.MaxSignatureLength();
string signature(siglen, 0x00);
// Sign, and trim signature to actual size
siglen = signer.SignMessage(prng, (const byte *) messageIn.data(), (size_t) messageIn.length(), (byte*)signature.data());
signature.resize(siglen);
cout << signature.data() << endl;
return signature;
}
This code generates the following error in Visual Studio when I try to do privateKey.Load(...):
First-chance exception at 0x7693C42D in DLLTest.exe: Microsoft C++ exception: CryptoPP::BERDecodeErr at memory location 0x0033EEA8.
Unhandled exception at 0x7693C42D in DLLTest.exe: Microsoft C++ exception: CryptoPP::BERDecodeErr at memory location 0x0033EEA8.
I am guessing I am doing something a bit stupid... any help would be great.
PS: I had a similar issue using ECDH for GMAC generation but got around it by saving the key as a SecByteBlock; that 'trick' doesn't seem to work in this case.
DLLTest.exe: Microsoft C++ exception: CryptoPP::BERDecodeErr ...
You have a private exponent, and not a private key. So you should not call Load on it. That's causing the Crypto++ BERDecodeErr exception.
The answer is detailed on the ECDSA wiki page, but it's not readily apparent. You need to perform the following to initialize privateKey given the curve and exponent:
string exp = "E4A6CFB431471CFCAE491FD566D19C87082CF9FA7722D7FA24B2B3F5669DBEFB";
exp.insert(0, "0x");
Integer x(exp.c_str());
privateKey.Initialize(ASN1::secp256r1(), x);
Prepending the "0x" ensures the Integer class will parse the ASCII string correctly. You can also append a "h" character to the string. You can see the parsing code for Integer class at Integer.cpp around line 2960 in the StringToInteger function.
Here's another way to do the same thing:
string exp = "E4A6CFB431471CFCAE491FD566D19C87082CF9FA7722D7FA24B2B3F5669DBEFB";
HexDecoder decoder;
decoder.Put((byte*)exp.data(), exp.size());
decoder.MessageEnd();
Integer x;
x.Decode(decoder, decoder.MaxRetrievable());
privateKey.Initialize(ASN1::secp256r1(), x);
The HexDecoder will perform the ASCII to binary conversion for you. The buffer held by the HexDecoder will then be consumed by the Integer using its Decode (BufferedTransformation &bt, size_t inputLen, Signedness=UNSIGNED) method.
And here is another way using HexDecoder (Crypto++ is as bad as scripting languages at times :)...
string exp = "E4A6CFB431471CFCAE491FD566D19C87082CF9FA7722D7FA24B2B3F5669DBEFB";
StringSource ss(exp, true /*pumpAll*/, new HexDecoder);
Integer x;
x.Decode(ss, ss.MaxRetrievable());
privateKey.Initialize(ASN1::secp256r1(), x);
After initializing the key, you should validate it:
bool result = privateKey.Validate( prng, 3 );
if( !result ) { /* Handle error */ }
This will output binary data:
cout << signature.data() << endl;
If you want something printable/readable, run it through a Crypto++ HexEncoder.
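For example (a sketch, reusing the signature string from the code above):
string hexSig;
StringSource ss(signature, true /*pumpAll*/,
    new HexEncoder(new StringSink(hexSig)));
cout << hexSig << endl;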
For others looking for this later:
string genSignature(const string& privKeyIn, const string& messageIn)
{
CryptoPP::Integer secretNumber(genSecretNumber(privKeyIn, messageIn));
AutoSeededRandomPool secretNumberGenerator;
if (encryptBase::debug)
{
cout << "secret number: " << secretNumber << endl;
}
SecByteBlock message(convertHexStrToSecByteBlock(messageIn));
ECDSA<ECP, SHA256>::PrivateKey privateKey;
string exp(privKeyIn);
exp.insert(0, "0x");
Integer x(exp.c_str());
privateKey.Initialize(ASN1::secp256r1(), x);
AutoSeededRandomPool prng;
if (!privateKey.Validate(prng, 3))
{
cout << "unable to verify key" << endl;
return "failed to verify key";
}
ECDSA<ECP, SHA256>::Signer signer(privateKey);
size_t siglen = signer.MaxSignatureLength();
string signature(siglen, 0x00);
siglen = signer.SignMessage(secretNumberGenerator, message.BytePtr(), message.size(), (byte*)signature.data());
signature.resize(siglen);
string encoded;
HexEncoder encoder;
encoder.Put((byte *) signature.data(), signature.size());
encoder.MessageEnd();
word64 size = encoder.MaxRetrievable();
if (size)
{
encoded.resize(size);
encoder.Get((byte*)encoded.data(), encoded.size());
}
return encoded;
}
I have a string of hexadecimals which I need to convert to a const byte*. I am using Crypto++ to do hashing, and it needs the key as a const byte*. Is there any way I can convert the hexadecimal string into a const byte* using any of the Crypto++ libraries, or must I come up with my own?
There is a HexDecoder class in Crypto++.
You need to feed it characters. It seems that Crypto++ does not directly distinguish between characters and bytes, so the following lines of code supplied by varren will work:
string destination;
StringSource ss(source, true, new HexDecoder(new StringSink(destination)));
const byte* result = (const byte*) destination.data();
I have string of hexadecimals which I need to convert to const byte*
...
But it will be in string. I need it to be in byte*
You should use a HexDecoder and ArraySink then. Something like:
string encoded = "FFEEDDCCBBAA99887766554433221100";
ASSERT(encoded.length() % 2 == 0);
size_t length = encoded.length() / 2;
unique_ptr<byte[]> decoded(new byte[length]);
StringSource ss(encoded, true /*pumpAll*/, new HexDecoder(new ArraySink(decoded.get(), length)));
You can then use the byte array decoded.get() as a byte*.
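For instance, a hypothetical follow-on use of the decoded bytes as a keyed-hash key (HMAC<SHA256> is a standard Crypto++ class; the message text here is purely illustrative):
#include "hmac.h"
#include "sha.h"
#include "filters.h"
#include "hex.h"
// decoded and length come from the snippet above
HMAC<SHA256> hmac(decoded.get(), length);
string message = "attack at dawn", mac, hexMac;
StringSource ss2(message, true /*pumpAll*/,
    new HashFilter(hmac, new StringSink(mac)));   // raw MAC bytes
StringSource ss3(mac, true /*pumpAll*/,
    new HexEncoder(new StringSink(hexMac)));      // printable hex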
You can also use a vector<byte>. In this case, the byte* is &v[0]. Something like:
string encoded = "FFEEDDCCBBAA99887766554433221100";
ASSERT(encoded.length() % 2 == 0);
size_t length = encoded.length() / 2;
vector<byte> decoded;
decoded.resize(length);
StringSource ss(encoded, true /*pumpAll*/, new HexDecoder(new ArraySink(&decoded[0], length)));
(comment) But it will be in string. I need it to be in byte*
This is even easier:
string encoded = "FFEEDDCCBBAA99887766554433221100";
string decoded;
StringSource ss(encoded, true /*pumpAll*/, new HexDecoder(new StringSink(decoded)));
const byte* data = reinterpret_cast<const byte*>(decoded.data());
If you want the non-const version then use:
byte* ptr = reinterpret_cast<byte*>(&decoded[0]);
// HEX to BIN using CryptoPP
string encoded = "FFEEDDCCBBAA99887766554433221100";
size_t length = encoded.length() / 2;
vector<byte> decoded;
decoded.resize(length);
StringSource ss(encoded, true, new HexDecoder(new ArraySink(&decoded[0], length)));