I am using the Crypto++ library for ARC4 encryption. I took my reference from here, but it is not fully explained: http://www.cryptopp.com/wiki/Stream_Cipher.
The following is my code:
string key = "key";
string msg = "hello";
ARC4 arc4((byte*)key.c_str(), sizeof((byte*)key.c_str()));
arc4.ProcessData((byte*)msg.c_str(), (byte*)msg.c_str(), sizeof((byte*)msg.c_str()));
arc4.ProcessData((byte*)msg.c_str(), (byte*)msg.c_str(), sizeof((byte*)msg.c_str()));
cout << msg << endl;
After encrypting and then decrypting, my message is total garbage that I cannot read; in short, it is not decrypted back to "hello".
So how can I encrypt and decrypt a message with a key as above?
Two problems. First, you need to use the string's size(), not sizeof(). Second, you need to reset the arc4 object before decrypting; otherwise, the cipher's state continues from the previous encryption.
string key = "key";
string msg = "hello";
ARC4 arc4((byte*)key.data(), key.size());
arc4.ProcessData((byte*)msg.data(), (byte*)msg.data(), msg.size());
// Reset state
arc4.SetKey((byte*)key.data(), key.size());
arc4.ProcessData((byte*)msg.data(), (byte*)msg.data(), msg.size());
cout << msg << endl;
Related
I am using the Crypto++ library for cryptography-related work. One sub-part of the task is to encrypt and decrypt text. The message can be up to 256 characters long, containing alphanumeric characters, spaces, dots, and special characters.
This piece of code works when the text length is less than or equal to 8 characters, but beyond that it fails to decrypt the encrypted text.
// g++ -std=c++1y crypto.cpp -I /home/shravan40/cryptopp/build -lcryptopp
#include <iostream>
#include <cryptopp/rsa.h>
#include <cryptopp/integer.h>
#include <cryptopp/osrng.h>
int main(){
// Keys
CryptoPP::Integer n("0xbeaadb3d839f3b5f"), e("0x11"), d("0x21a5ae37b9959db9");
CryptoPP::RSA::PrivateKey privKey;
privKey.Initialize(n, e, d);
CryptoPP::RSA::PublicKey pubKey;
pubKey.Initialize(n, e);
// Encryption
std::string message = "Shravan Kumar";
CryptoPP::Integer m((const byte *)message.data(), message.size());
std::cout << "m: " << m << std::endl;
// m: 126879297332596.
CryptoPP::Integer c = pubKey.ApplyFunction(m);
std::cout << "c: " << std::hex << c << std::endl;
// c: 3f47c32e8e17e291h
// Decryption
CryptoPP::AutoSeededRandomPool prng;
CryptoPP::Integer r = privKey.CalculateInverse(prng, c);
std::cout << "r: " << std::hex << r << std::endl;
// r: 736563726574h
std::string recovered;
recovered.resize(r.MinEncodedSize());
r.Encode((byte *)recovered.data(), recovered.size());
std::cout << "recovered: " << recovered << std::endl;
// recovered: Expected : (Shravan Kumar), Received -> y5��dqm0
return 0;
}
Richard Critten is correct in his comment that usually hybrid encryption is used (an asymmetric cipher such as RSA with a symmetric cipher such as AES).
For these kinds of insecure examples, though, you are usually simply required to split the plaintext into parts no larger than the modulus n. So in your case, just group every 8 bytes / characters together and treat them as a (big-endian) number. As the input seems to be ASCII, the highest bit of those 8 bytes will always be zero, so you should not have any problems encrypting/decrypting. The input for RSA must, of course, always be less than n.
You may of course have to think up a smart way of handling the last part of the string.
Notes:
In case it hasn't been said (yet): raw RSA without padding is not secure either. So it is not just the key size that would be a problem if this example were implemented in the field.
I haven't got a clue what you're doing with regard to decryption. You should have a look at your textbook again I suppose.
Integer m((const byte *)message.data(), message.size());
If you use message.size()+1, then the message will include the trailing NULL. You can use it during decryption to determine where the recovered string ends. Otherwise, you are going to need to track length of the message.
You might also be interested in Raw RSA from the Crypto++ wiki. But as Maarten cautioned, it's tricky to get right and build into a scheme.
You might consider something using RSA encryption with OAEP or PKCS v1.5 padding. Also see RSA Encryption Schemes on the Crypto++ wiki.
I believe this is undefined behavior:
std::string recovered;
recovered.resize(r.MinEncodedSize());
r.Encode((byte *)recovered.data(), recovered.size());
I think you need to use &recovered[0] to get the non-const pointer. It may be causing your problem.
I'm having an issue with allocating a new key for 3 key Triple DES in crypto++.
I've generated a new key as a string but need to allocate it to SecByteBlock for use in Crypto++.
Currently I generate a random key using the PRNG at the start, but when I attempt to change the key using string output from DES_EDE3, it appears to use the same key.
I think the issue is with the conversion between string and SecByteBlock, or the allocation to SecByteBlock as shown below.
Any help would be greatly appreciated!
SecByteBlock GENERATOR::setKey(string keyString){
SecByteBlock replacementKey(24);
replacementKey= SecByteBlock(reinterpret_cast<const byte*>(keyString.data()), keyString.size());
return newKey = replacementKey;
}
I attempt to change the key using string output from DES_EDE3, it appears to use the same key
It almost sounds like you are trying to use 3DES as a PRF keyed with a password. If so, use HKDF. It's designed for these types of extract-then-expand operations.
HKDF is available in Crypto++ 5.6.3 and above. If you need it for a downlevel client, then copy the header where you need it.
SecByteBlock GENERATOR::setKey(string keyString){
SecByteBlock replacementKey(24);
replacementKey= SecByteBlock(reinterpret_cast<const byte*>(keyString.data()), keyString.size());
return newKey = replacementKey;
}
Though you size replacementKey to 24 bytes, it can be resized by the assignment replacementKey = SecByteBlock(...).
You might want to try the following:
SecByteBlock GENERATOR::setKey(const string& keyString)
{
SecByteBlock key((const byte*)keyString.data(), keyString.size());
if(key.size() < DES_EDE3::KEYLENGTH)
key.CleanGrow(DES_EDE3::KEYLENGTH);
else
key.resize(DES_EDE3::KEYLENGTH);
return key;
}
CleanGrow sizes the memory block to DES_EDE3::KEYLENGTH and backfills the block with 0s as needed. resize will truncate to DES_EDE3::KEYLENGTH if it's too large.
You could also do something like:
SecByteBlock key(DES_EDE3::KEYLENGTH);
size_t s = STDMIN(key.size(), keyString.size());
memcpy(key.data(), keyString.data(), s);
if(s < DES_EDE3::KEYLENGTH)
memset(key.data()+s, 0, DES_EDE3::KEYLENGTH-s);
-----
To combine the first two, you might consider this:
SecByteBlock GENERATOR::setKey(const string& keyString)
{
// Block is uninitialized
SecByteBlock key(DES_EDE3::KEYLENGTH);
HKDF<SHA256> kdf;
kdf.DeriveKey(key.data(), key.size(), (const byte*)keyString.data(), keyString.size(), NULL, 0);
return key;
}
-----
You can output a SecByteBlock with code like:
SecByteBlock b = GENERATOR::setKey(...);
...
cout << "Derived key: ";
ArraySource as(b.data(), b.size(), true, new HexEncoder(new FileSink(cout)));
cout << endl;
The following will encode it using Base64:
ArraySource as(b.data(), b.size(), true, new Base64Encoder(new FileSink(cout)));
I am studying C++ independently, and I have one problem that I have not been able to solve for more than a week. I hope you can help me.
I need to get the SHA-1 digest of a Unicode string (like Привет), but I don't know how to do that.
I tried to do it like this, but it returns the wrong digest!
For wstring(L"Ы") it returns A469A61DF29A7568A6CC63318EA8741FA1CF2A7.
I need 8dbe718ab1e0c4d75f7ab50fc9a53ec4f0528373.
Regards and sorry for my English :).
CryptoPP 5.6.2
MVC++ 2013
#include <iostream>
#include "cryptopp562\cryptlib.h"
#include "cryptopp562\sha.h"
#include "cryptopp562\hex.h"
int main() {
std::wstring string(L"Ы");
int bs_size = (int)string.length() * sizeof(wchar_t);
byte* bytes_string = new byte[bs_size];
int n = 0; //real bytes count
for (int i = 0; i < string.length(); i++) {
wchar_t wcharacter = string[i];
int high_byte = wcharacter & 0xFF00;
high_byte = high_byte >> 8;
int low_byte = wcharacter & 0xFF;
if (high_byte != 0) {
bytes_string[n++] = (byte)high_byte;
}
bytes_string[n++] = (byte)low_byte;
}
CryptoPP::SHA1 sha1;
std::string hash;
CryptoPP::StringSource ss(bytes_string, n, true,
new CryptoPP::HashFilter(sha1,
new CryptoPP::HexEncoder(
new CryptoPP::StringSink(hash)
)
)
);
std::cout << hash << std::endl;
return 0;
}
I need to get a SHA1 digest of a Unicode string (like Привет), but I don't know how to do that.
The trick here is that you need to know how to encode the Unicode string. On Windows, a wchar_t is 2 octets, while on Linux a wchar_t is 4 octets. There's a Crypto++ wiki page on it, Character Set Considerations, but it's not that good.
To interoperate most effectively, always use UTF-8. That means converting UTF-16 or UTF-32 to UTF-8. Because you are on Windows, you will want to call the WideCharToMultiByte function to convert using CP_UTF8. If you were on Linux, you would use libiconv.
Crypto++ has a built-in function called StringNarrow that uses C++. It's in the file misc.h. Be sure to call setlocale before using it.
Stack Overflow has a few questions on using the Windows function. See, for example, How do you properly use WideCharToMultiByte.
I need - 8dbe718ab1e0c4d75f7ab50fc9a53ec4f0528373
What is the hash (SHA-1, SHA-256, ...)? Is it a HMAC (keyed hash)? Is the information salted (like a password in storage)? How is it encoded? I have to ask because I cannot reproduce your desired results:
SHA-1: 2805AE8E7E12F182135F92FB90843BB1080D3BE8
SHA-224: 891CFB544EB6F3C212190705F7229D91DB6CECD4718EA65E0FA1B112
SHA-256: DD679C0B9FD408A04148AA7D30C9DF393F67B7227F65693FFFE0ED6D0F0ADE59
SHA-384: 0D83489095F455E4EF5186F2B071AB28E0D06132ABC9050B683DA28A463697AD
1195FF77F050F20AFBD3D5101DF18C0D
SHA-512: 0F9F88EE4FA40D2135F98B839F601F227B4710F00C8BC48FDE78FF3333BD17E4
1D80AF9FE6FD68515A5F5F91E83E87DE3C33F899661066B638DB505C9CC0153D
Here's the program I used. Be sure to specify the length of the wide string. If you don't (and use -1 for the length), then WideCharToMultiByte will include the terminating ASCII-Z in its calculations. Since we are using a std::string, we don't need the function to include the ASCII-Z terminator.
int main(int argc, char* argv[])
{
wstring m1 = L"Привет"; string m2;
int req = WideCharToMultiByte(CP_UTF8, 0, m1.c_str(), (int)m1.length(), NULL, 0, NULL, NULL);
if(req < 0 || req == 0)
throw runtime_error("Failed to convert string");
m2.resize((size_t)req);
int cch = WideCharToMultiByte(CP_UTF8, 0, m1.c_str(), (int)m1.length(), &m2[0], (int)m2.length(), NULL, NULL);
if(cch < 0 || cch == 0)
throw runtime_error("Failed to convert string");
// Should not be required
m2.resize((size_t)cch);
string s1, s2, s3, s4, s5;
SHA1 sha1; SHA224 sha224; SHA256 sha256; SHA384 sha384; SHA512 sha512;
HashFilter f1(sha1, new HexEncoder(new StringSink(s1)));
HashFilter f2(sha224, new HexEncoder(new StringSink(s2)));
HashFilter f3(sha256, new HexEncoder(new StringSink(s3)));
HashFilter f4(sha384, new HexEncoder(new StringSink(s4)));
HashFilter f5(sha512, new HexEncoder(new StringSink(s5)));
ChannelSwitch cs;
cs.AddDefaultRoute(f1);
cs.AddDefaultRoute(f2);
cs.AddDefaultRoute(f3);
cs.AddDefaultRoute(f4);
cs.AddDefaultRoute(f5);
StringSource ss(m2, true /*pumpAll*/, new Redirector(cs));
cout << "SHA-1: " << s1 << endl;
cout << "SHA-224: " << s2 << endl;
cout << "SHA-256: " << s3 << endl;
cout << "SHA-384: " << s4 << endl;
cout << "SHA-512: " << s5 << endl;
return 0;
}
You say ‘but it returns wrong digest’ – what are you comparing it with?
Key point: digests such as SHA-1 don't work with sequences of characters, but with sequences of bytes.
What you're doing in this snippet of code is generating an ad-hoc encoding of the unicode characters in the string "Ы". This encoding will (as it turns out) match the UTF-16 encoding if the characters in the string are all in the BMP (‘basic multilingual plane’, which is true in this case) and if the numbers that end up in wcharacter are integers representing unicode codepoints (which is sort-of probably correct, but not, I think, guaranteed).
If the digest you're comparing it with turns an input string into a sequence of bytes using the UTF-8 encoding (which is quite likely), then that will produce a different byte sequence from yours, so the SHA-1 digest of that sequence will be different from the digest you calculate here.
So:
Check what encoding your test string is using.
You'd be best off using some library functions to specifically generate a UTF-16 or UTF-8 (as appropriate) encoding of the string you want to process, to ensure that the byte sequence you're working with is what you think it is.
There's an excellent introduction to unicode and encodings in the aptly-named document The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
This seems to work fine for me.
Rather than fiddling about trying to extract the pieces, I simply cast the wide-character buffer to a const byte* and pass that (and the adjusted size) to the hash function.
int main() {
std::wstring string(L"Привет");
CryptoPP::SHA1 sha1;
std::string hash;
CryptoPP::StringSource ss(
reinterpret_cast<const byte*>(string.c_str()), // cast to const byte*
string.size() * sizeof(std::wstring::value_type), // adjust for size
true,
new CryptoPP::HashFilter(sha1,
new CryptoPP::HexEncoder(
new CryptoPP::StringSink(hash)
)
)
);
std::cout << hash << std::endl;
return 0;
}
Output:
C6F8291E68E478DD5BD1BC2EC2A7B7FC0CEE1420
EDIT: To add.
The result is going to be encoding-dependent. For example, I ran this on Linux, where wchar_t is 4 bytes. On Windows, I believe wchar_t may be only 2 bytes.
For consistency it may be better to use UTF-8 and store the text in a normal std::string. This also makes calling the API simpler:
int main() {
std::string string("Привет"); // UTF-8 encoded
CryptoPP::SHA1 sha1;
std::string hash;
CryptoPP::StringSource ss(
string,
true,
new CryptoPP::HashFilter(sha1,
new CryptoPP::HexEncoder(
new CryptoPP::StringSink(hash)
)
)
);
std::cout << hash << std::endl;
return 0;
}
Output:
2805AE8E7E12F182135F92FB90843BB1080D3BE8
I am new to Crypto++ and have been struggling for a while with the creation of private keys for ECDSA signing.
I have a hex encoded private exponent E4A6CFB431471CFCAE491FD566D19C87082CF9FA7722D7FA24B2B3F5669DBEFB. This is stored as a string.
I want to use this to sign a text block using ECDSA. My code looks a bit like this
string Sig::genSignature(const string& privKeyIn, const string& messageIn)
{
AutoSeededRandomPool prng;
ECDSA<ECP, SHA256>::PrivateKey privateKey;
privateKey.AccessGroupParameters().Initialize(ASN1::secp256r1());
privateKey.Load(StringSource(privKeyIn, true, NULL).Ref());
ECDSA<ECP, SHA256>::Signer signer(privateKey);
// Determine maximum size, allocate a string with that size
size_t siglen = signer.MaxSignatureLength();
string signature(siglen, 0x00);
// Sign, and trim signature to actual size
siglen = signer.SignMessage(prng, (const byte *) messageIn.data(), (size_t) messageIn.length(), (byte*)signature.data());
signature.resize(siglen);
cout << signature.data() << endl;
return signature;
}
This code generates the following error in Visual Studio when I try to do privateKey.Load(...):
First-chance exception at 0x7693C42D in DLLTest.exe: Microsoft C++ exception: CryptoPP::BERDecodeErr at memory location 0x0033EEA8.
Unhandled exception at 0x7693C42D in DLLTest.exe: Microsoft C++ exception: CryptoPP::BERDecodeErr at memory location 0x0033EEA8.
I am guessing I am doing something a bit stupid... any help would be great?
PS: I had a similar issue using ECDH for GMAC generation, but got around it by saving the key as a SecByteBlock; that 'trick' doesn't seem to work in this case.
DLLTest.exe: Microsoft C++ exception: CryptoPP::BERDecodeErr ...
You have a private exponent, and not a private key. So you should not call Load on it. That's causing the Crypto++ BERDecodeErr exception.
The answer is detailed on the ECDSA wiki page, but it's not readily apparent. You need to perform the following to initialize the privateKey given the curve and exponent:
string exp = "E4A6CFB431471CFCAE491FD566D19C87082CF9FA7722D7FA24B2B3F5669DBEFB";
exp.insert(0, "0x");
Integer x(exp.c_str());
privateKey.Initialize(ASN1::secp256r1(), x);
Prepending the "0x" ensures the Integer class will parse the ASCII string correctly. You can also append an "h" character to the string instead. You can see the parsing code for the Integer class in Integer.cpp, around line 2960, in the StringToInteger function.
Here's another way to do the same thing:
string exp = "E4A6CFB431471CFCAE491FD566D19C87082CF9FA7722D7FA24B2B3F5669DBEFB";
HexDecoder decoder;
decoder.Put((byte*)exp.data(), exp.size());
decoder.MessageEnd();
Integer x;
x.Decode(decoder, decoder.MaxRetrievable());
privateKey.Initialize(ASN1::secp256r1(), x);
The HexDecoder will perform the ASCII to binary conversion for you. The buffer held by the HexDecoder will then be consumed by the Integer using its Decode (BufferedTransformation &bt, size_t inputLen, Signedness=UNSIGNED) method.
And here is another way using HexDecoder (Crypto++ is as bad as scripting languages at times :)...
string exp = "E4A6CFB431471CFCAE491FD566D19C87082CF9FA7722D7FA24B2B3F5669DBEFB";
StringSource ss(exp, true /*pumpAll*/, new HexDecoder);
Integer x;
x.Decode(ss, ss.MaxRetrievable());
privateKey.Initialize(ASN1::secp256r1(), x);
After initializing the key, you should validate it:
bool result = privateKey.Validate( prng, 3 );
if( !result ) { /* Handle error */ }
This will output binary data:
cout << signature.data() << endl;
If you want something printable/readable, run it though a Crypto++ HexEncoder.
For others looking for this later:
string genSignature(const string& privKeyIn, const string& messageIn)
{
CryptoPP::Integer secretNumber(genSecretNumber(privKeyIn, messageIn));
AutoSeededRandomPool secretNumberGenerator;
if (encryptBase::debug)
{
cout << "secret number: " << secretNumber << endl;
}
SecByteBlock message(convertHexStrToSecByteBlock(messageIn));
ECDSA<ECP, SHA256>::PrivateKey privateKey;
string exp(privKeyIn);
exp.insert(0, "0x");
Integer x(exp.c_str());
privateKey.Initialize(ASN1::secp256r1(), x);
AutoSeededRandomPool prng;
if (!privateKey.Validate(prng, 3))
{
cout << "unable to verify key" << endl;
return "failed to verify key";
}
ECDSA<ECP, SHA256>::Signer signer(privateKey);
size_t siglen = signer.MaxSignatureLength();
string signature(siglen, 0x00);
siglen = signer.SignMessage(secretNumberGenerator, message.BytePtr(), message.size(), (byte*)signature.data());
signature.resize(siglen);
string encoded;
HexEncoder encoder;
encoder.Put((byte *) signature.data(), signature.size());
encoder.MessageEnd();
word64 size = encoder.MaxRetrievable();
if (size)
{
encoded.resize(size);
encoder.Get((byte*)encoded.data(), encoded.size());
}
return encoded;
}
I have a question referring to the encryption code in this question:
Crypto++ encrypt and decrypt in two different c++ programs
If I want to use a custom key/iv, how can I do this?
If I want to use a custom key/iv, how can I do this?
Just plug it into a cipher with a mode. There are plenty of modes to choose from, but you should use an authenticated encryption mode like EAX, CCM or GCM. See Category:Mode for discussion of the modes in Crypto++.
The code below takes a password or secret, keys a cipher, and then encrypts and encodes a message. Next, it decrypts the message. Finally, it prints some of the parameters.
try {
// KDF parameters
string password = "Super secret password";
unsigned int iterations = 15000;
char purpose = 0; // unused by Crypto++
// 32 bytes of derived material. Used to key the cipher.
// 16 bytes are for the key, and 16 bytes are for the iv.
SecByteBlock derived(32);
// KDF function
PKCS5_PBKDF2_HMAC<SHA256> kdf;
kdf.DeriveKey(derived.data(), derived.size(), purpose, (byte*)password.data(), password.size(), NULL, 0, iterations);
// Encrypt a secret message
string plaintext = "Attack at dawn", ciphertext, recovered;
// Key the cipher
EAX<AES>::Encryption encryptor;
encryptor.SetKeyWithIV(derived.data(), 16, derived.data() + 16, 16);
AuthenticatedEncryptionFilter ef(encryptor, new StringSink(ciphertext));
ef.Put((byte*)plaintext.data(), plaintext.size());
ef.MessageEnd();
// Key the cipher
EAX<AES>::Decryption decryptor;
decryptor.SetKeyWithIV(derived.data(), 16, derived.data() + 16, 16);
AuthenticatedDecryptionFilter df(decryptor, new StringSink(recovered));
df.Put((byte*)ciphertext.data(), ciphertext.size());
df.MessageEnd();
// Done with encryption and decryption
// Encode various parameters
HexEncoder encoder;
string key, iv, cipher;
encoder.Detach(new StringSink(key));
encoder.Put(derived.data(), 16);
encoder.MessageEnd();
encoder.Detach(new StringSink(iv));
encoder.Put(derived.data() + 16, 16);
encoder.MessageEnd();
encoder.Detach(new StringSink(cipher));
encoder.Put((byte*)ciphertext.data(), ciphertext.size());
encoder.MessageEnd();
// Print stuff
cout << "plaintext: " << plaintext << endl;
cout << "key: " << key << endl;
cout << "iv: " << iv << endl;
cout << "ciphertext: " << cipher << endl;
cout << "recovered: " << recovered << endl;
}
catch(CryptoPP::Exception& ex)
{
cerr << ex.what() << endl;
}
A run of the program produces the following output.
$ ./cryptopp-test.exe
plaintext: Attack at dawn
key: 7A8C7732898FB687669CB7DBEFBDD789
iv: 0AA980BABE72797E415C9B8979BF30EF
ciphertext: 197D0BD1A12577393AD1B1696B75D0FC6B8A142CF15B5F887AA965CE75F0
recovered: Attack at dawn
Even better, use an Integrated Encryption Scheme. Crypto++ provides two of them. The first is the Elliptic Curve Integrated Encryption Scheme, which operates over fields of elliptic curves. The second is the Discrete Logarithm Integrated Encryption Scheme, which operates over the field of integers.
There are a number of non-obvious reasons why it's "even better", but the big one is that it's IND-CCA2. Other, more practical ones include: you can't reuse a security context, because correct use is built into the scheme; and padding has been removed, which greatly simplifies proofs and avoids potential oracles. The system is also predicated on discrete logs, which makes it a Diffie-Hellman-based problem, and it's believed to be hard everywhere.