I am doing RSA encryption.
I want to convert a reference to the public key class to a string so that I can pass it to the server.
// declaration
const CRSAPrivateKey &iRSAPrivateKey = iRSAKeyPair->PrivateKey();
const CRSAPublicKey &iRSAPublicKey = iRSAKeyPair->PublicKey();
I have to convert &iRSAPublicKey into a TBuf.
I have tried a lot but failed to convert it.
Please help me out of this situation.
Thanks in advance.
If you're using CRSAPublicKey, you probably downloaded the Symbian cryptography library and its documentation from http://developer.symbian.com/main/tools_and_sdks/developer_tools/supported/crypto_api/index.jsp
Admittedly, the documentation isn't explicit, but I would venture that you can just send the modulus and exponent components to any other RSA engine in order to reconstitute the public key:
HBufC8* localModulusBuffer = iRSAPublicKey.N().BufferLC();
HBufC8* localExponentBuffer = iRSAPublicKey.E().BufferLC();
Then simply copy the 2 HBufC8 into a TBuf if you really need it.
Just remember that methods with a trailing "C" leave what they return on the cleanup stack.
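Putting it together, a minimal sketch (the TBuf8 size and the framing are my assumptions, not something from the Symbian docs):

HBufC8* localModulusBuffer = iRSAPublicKey.N().BufferLC();
HBufC8* localExponentBuffer = iRSAPublicKey.E().BufferLC();
TBuf8<512> keyData;                   // assumed size; must fit modulus + exponent
keyData.Copy(*localModulusBuffer);    // copy the modulus first
keyData.Append(*localExponentBuffer); // then append the public exponent
CleanupStack::PopAndDestroy(2);       // both BufferLC() results are on the cleanup stack

Note that the server will also need some framing (e.g. a length prefix) to split the modulus from the exponent again.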
For a personal C++ project, I'd like to be able to encrypt plain-text data using a private key. In my project I make extensive use of the Poco C++ libraries, and I'd like to handle this feature with them.
Currently I am successfully loading a private key file in order to create a Poco::Crypto::RSAKey:
std::filesystem::path keyFile = std::filesystem::path("MyFile");
Poco::SharedPtr<Poco::Crypto::RSAKey> key(new Poco::Crypto::RSAKey("", keyFile.string()));
Poco::Crypto::CipherFactory& factory = Poco::Crypto::CipherFactory::defaultFactory();
Poco::Crypto::Cipher* pRSACipher = factory.createCipher(*key.get());
std::string plainText("MyTextToEncrypt");
std::string encrypted = pRSACipher->encryptString(plainText, Poco::Crypto::Cipher::ENC_BASE64_NO_LF);
Having a look at the official Poco documentation, I found out that both Poco::Crypto::RSAKey and Poco::Crypto::ECKey are deprecated. Looking for an alternative to these deprecated classes, both in the Poco documentation and all around the web, I could not understand why they were declared deprecated. Moreover, I could not find which classes should replace them.
At the same time, reading the Poco::Crypto::CipherFactory documentation, the Poco::Crypto::CipherFactory::createCipher overload that takes a Poco::Crypto::RSAKey is not marked as deprecated.
Please, can somebody tell me whether using Poco::Crypto::RSAKey is still advisable, or whether another class should be used? And which one?
Thanks in advance!
I have problems with my C++ McEliece implementation using the Botan crypto library. There seems to be virtually only one example of it on the whole internet, linked below.
https://www.cryptosource.de/docs/mceliece_in_botan.pdf
But this example is 6 years old, hence it is totally outdated, and the Botan docs do not provide any other.
The problem is basically that, unfortunately, function names and specs have changed over time, so I get a couple of compiler errors when I try to use them. I managed to demystify some of them by looking into the header implementations, but now I'm, frankly speaking, up against a wall.
It would be great if anybody familiar with the Botan McEliece implementation could give me a hint as to what the current functions are called.
This is my code, with marks. I removed a lot of unnecessary code and other implementations to make it more readable. You will also not be able to run it without the necessary modules, but I will try to write it down in such a way that somebody with the Botan library should be able to run it.
//to compile: g++ -o mc_eliece mc_eliece.cpp -Wall -I/usr/local/include/botan-2/ -I/home/pi/projects/RNG_final/ -ltss2-esys -ltss2-rc -lbotan-2
#include <iostream>
#include <botan/rng.h>
#include <botan/system_rng.h>
#include <botan/mceies.h>
#include <botan/mceliece.h>
int main() {
    Botan::size_t n = 1632; // Parameters for key generation
    Botan::size_t t = 33;

    // initialize RNG type
    Botan::System_RNG rng; // is a standard Botan RNG

    // create a new McEliece private key with code length n and error weight t
    Botan::McEliece_PrivateKey sk1(rng, n, t); // actually works!

    // derive the corresponding public key
    Botan::McEliece_PublicKey pk1(*dynamic_cast<Botan::McEliece_PublicKey*>(&sk1)); // actually works!

    // encode the public key
    std::vector<uint8_t> pk_enc = pk1.subject_public_key(); // actually works!

    // encode the private key
    Botan::secure_vector<uint8_t> sk_enc = sk1.private_key_bits(); // had to replace sk1.pkcs8_private_key()

    // encryption side: decode a serialized public key
    Botan::McEliece_PublicKey pk(pk_enc);
    McEliece_KEM_Encryptor enc(pk); // does not work, can't find a working corresponding function in the header

    // perform encryption -> will find out if it works once the above is solved
    std::pair<secure_vector<Botan::byte>, secure_vector<Botan::byte> > ciphertext__sym_key = enc.encrypt(rng);
    secure_vector<Botan::byte> sym_key_encr = ciphertext__sym_key.second;
    secure_vector<Botan::byte> ciphertext = ciphertext__sym_key.first;

    // code used on the decrypting side: -> will find out if it works once the above is solved
    // decode a serialized private key
    McEliece_PrivateKey sk(sk_enc);
    McEliece_KEM_Decryptor dec(sk);

    // perform decryption -> will find out if it works once the above is solved
    secure_vector<Botan::byte> sym_key_decr = dec.decrypt(&ciphertext[0], ciphertext.size());

    // both sides now have the same 64-byte symmetric key.
    // use this key to instantiate an authenticated encryption scheme.
    // in case shorter keys are needed, they can simply be cut off.
    return 0;
}
Thanks for any help in advance.
I have now updated the example code in https://www.cryptosource.de/docs/mceliece_in_botan.pdf to reflect the changes to Botan's new KEM API.
Please note that it is unnecessary to provide a salt value for the KDF when it is used in the context of a KEM for a public-key scheme such as McEliece. That the KDF can accept a salt value here is a mere artefact of the API, owing to the fact that KDFs can also be used in other contexts. Specifically, a salt value is only necessary when deriving keys from secrets that potentially lack entropy, such as passwords. There it mitigates attacks based on precomputed tables.
The McEliece unit test can be taken as reference (link).
Based on that code, your example can be rewritten as follows:
#include <botan/auto_rng.h>
#include <botan/data_src.h>
#include <botan/hex.h>
#include <botan/mceies.h>
#include <botan/mceliece.h>
#include <botan/pkcs8.h>
#include <botan/pubkey.h>
#include <botan/x509_key.h>
#include <cassert>
#include <iostream>
int main() {
    // Parameters for McEliece key
    // Reference: https://en.wikipedia.org/wiki/McEliece_cryptosystem#Key_sizes
    Botan::size_t n = 1632;
    Botan::size_t t = 33;

    // Size of the symmetric key in bytes
    Botan::size_t shared_key_size = 64;

    // Key-derivation function to be used for key encapsulation
    // Reference: https://botan.randombit.net/handbook/api_ref/kdf.html
    std::string kdf{"KDF1(SHA-512)"};

    // Salt to be used for key derivation
    // NOTE: Pick the salt dynamically; Botan recommends, for example, using a session ID.
    std::vector<Botan::byte> salt{0x01, 0x02, 0x03};

    Botan::AutoSeeded_RNG rng{};

    // Generate private key
    Botan::McEliece_PrivateKey priv{rng, n, t};
    std::vector<Botan::byte> pub_enc{priv.subject_public_key()};
    Botan::secure_vector<Botan::byte> priv_enc{Botan::PKCS8::BER_encode(priv)};

    // Encrypting side: Create encapsulated symmetric key
    std::unique_ptr<Botan::Public_Key> pub{Botan::X509::load_key(pub_enc)};
    Botan::PK_KEM_Encryptor encryptor{*pub, rng, kdf};
    Botan::secure_vector<Botan::byte> encapsulated_key{};
    Botan::secure_vector<Botan::byte> shared_key1{};
    encryptor.encrypt(encapsulated_key, shared_key1, shared_key_size, rng, salt);
    std::cout << "Shared key 1: " << Botan::hex_encode(shared_key1) << std::endl;

    // Decrypting side: Unpack encapsulated symmetric key
    Botan::DataSource_Memory priv_enc_src{priv_enc};
    std::unique_ptr<Botan::Private_Key> priv2{
        Botan::PKCS8::load_key(priv_enc_src)};
    Botan::PK_KEM_Decryptor decryptor{*priv2, rng, kdf};
    Botan::secure_vector<Botan::byte> shared_key2{
        decryptor.decrypt(encapsulated_key, shared_key_size, salt)};
    std::cout << "Shared key 2: " << Botan::hex_encode(shared_key2) << std::endl;

    assert(shared_key1 == shared_key2);
    return 0;
}
I tested this code against Botan 2.15.0. Example output:
$ g++ -g $(pkg-config --cflags --libs botan-2) test.cpp
$ ./a.out
Shared key 1: 32177925CE5F3D607BA45575195F13B9E0123BD739580DFCF9AE53D417C530DB115867E5E377735CB405CDA6DF7866C647F85FDAC5C407BB2E2C3A8E7D41A5CC
Shared key 2: 32177925CE5F3D607BA45575195F13B9E0123BD739580DFCF9AE53D417C530DB115867E5E377735CB405CDA6DF7866C647F85FDAC5C407BB2E2C3A8E7D41A5CC
Some notes on what I changed compared to the code you gave:
Every Botan::Private_Key is also a Botan::Public_Key (reference). Therefore, we do not need to cast the generated private key to a public key. Instead, we can call subject_public_key directly on the private key to obtain the public key encoding.
Using private_key_bits to obtain the raw private key bits for serialization works, but using PKCS8 might be more robust in terms of interoperability. So I would use Botan::PKCS8::BER_encode on the private key for that reason.
Constructing the McEliece_PublicKey directly from the public key encoding did not work for me, as the constructor expects the raw public key bits and not the x509 public key encoding. For this reason, I had to use Botan::X509::load_key.
The McEliece_KEM_Encryptor has been removed (reference):
Add generalized interface for KEM (key encapsulation) techniques. Convert McEliece KEM to use it. The previous interfaces McEliece_KEM_Encryptor and McEliece_KEM_Decryptor have been removed. The new KEM interface now uses a KDF to hash the resulting keys; to get the same output as previously provided by McEliece_KEM_Encryptor, use "KDF1(SHA-512)" and request exactly 64 bytes.
Instead you would use Botan::PK_KEM_Encryptor, which takes the public key as a parameter and infers the encryption algorithm to be used from the public key. You can see from the code that it becomes more flexible this way: we no longer have to reference McEliece after the key has been generated. If we want to switch to a different algorithm, we only have to change the key generation part, not the key encapsulation part.
The same applies to McEliece_KEM_Decryptor.
As noted in the quote above, the new KEM interface allows you to specify the key-derivation function to be used for KEM, as well as the desired size of the symmetric key. Also, you can pass a salt that will be used for key derivation. For the salt, you would use some value that is unique to your application, or even better, unique to the current session (reference):
Typically salt is a label or identifier, such as a session id.
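For example, if your protocol layer already exposes a session identifier, a sketch of deriving the salt from it (the get_session_id accessor here is hypothetical):

// Hypothetical: reuse an existing session identifier as the KDF salt.
std::string session_id = get_session_id(); // hypothetical accessor from your protocol layer
std::vector<Botan::byte> salt(session_id.begin(), session_id.end());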
The background of this is that I'm going to be given a binary file that's been signed using Crypto++ (with the signature included in the file) along with the public key, and I want to be able to verify the file and extract the data (without its signature) into another file.
The code that's used to do this with Crypto++ is relatively trivial, but I've been looking around and can't find anything similar. Everything I've looked at seems to involve messages where the signature is separate, and I'm not sure how to work out where in the file to pull the signature from.
The reason I want to do this is that, given I have a need for OpenSSL for other reasons, I don't want to have to also include Crypto++ in my application if I don't have to.
The existing verification code is like this:
const char *key = kPublicKey; // hex encoded public key
CryptoPP::StringSource f( key, true, new CryptoPP::HexDecoder);
CryptoPP::RSASS< CryptoPP::PSSR, CryptoPP::SHA1 >::Verifier verifier(f);
CryptoPP::FileSource( packageFilename.toStdString().c_str(), true,
    new CryptoPP::SignatureVerificationFilter(
        verifier,
        new CryptoPP::FileSink( recoveredFilename.toStdString().c_str() ),
        CryptoPP::SignatureVerificationFilter::THROW_EXCEPTION | CryptoPP::SignatureVerificationFilter::PUT_MESSAGE )
);
Which, as I said, looks trivial, but presumably there's a lot going on under the hood. Essentially, it's given the name of a file that includes the content AND the signature, plus a "verifier" which incorporates the public key; it checks that the file content is correct and produces a copy of the file without the signature in it.
What I want to do, then, is replicate this using OpenSSL, but I'm not sure whether there are functions in OpenSSL that will let me just pass the whole file and the public key and let it work out where the signature is itself, or whether I need to pull the signature out of the file somehow (it's stored at the end of the file!), then pass the file content bit by bit into EVP_DigestVerifyUpdate, and then pass the signature into the EVP_DigestVerifyFinal function.
Any help would be very gratefully appreciated.
UPDATE
I'm new to digital signing, so I may have missed the significance of PSSR here. After some further research, I now understand that the idea of PSSR is that the signature contains the message. It may be that the whole of the above can be summarised as:
Given a public key, and a PSSR signature generated using Crypto++, is it possible to verify the message and extract the original data using OpenSSL?
From the files (signatures) I've looked at that were generated by Crypto++ using its PSSR implementation, it looks like the data isn't changed; it just has extra stuff added at the end (the SIGNATURE_AT_END flag is used). I am struggling to find any reference to PSSR/PSS-R and OpenSSL using Google.
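In case it helps to make the question concrete, this is roughly the plain-PSS flow I had in mind before realising the PSS-R aspect (a sketch only; I'm assuming the signature is simply the last EVP_PKEY_size(pkey) bytes of the file, and SHA1 to match the Crypto++ template parameters):

#include <openssl/evp.h>
#include <openssl/rsa.h>
#include <vector>

// Sketch: verify "data || signature", assuming the signature occupies the
// last EVP_PKEY_size(pkey) bytes of the file contents. pkey is an EVP_PKEY*
// already loaded from the hex-encoded public key.
bool verifyFile(EVP_PKEY *pkey, const std::vector<unsigned char> &file)
{
    const size_t sigLen = static_cast<size_t>(EVP_PKEY_size(pkey));
    if (file.size() < sigLen)
        return false;
    const size_t msgLen = file.size() - sigLen;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_PKEY_CTX *pctx = nullptr;
    bool ok = EVP_DigestVerifyInit(ctx, &pctx, EVP_sha1(), nullptr, pkey) == 1
           && EVP_PKEY_CTX_set_rsa_padding(pctx, RSA_PKCS1_PSS_PADDING) > 0
           && EVP_DigestVerifyUpdate(ctx, file.data(), msgLen) == 1
           && EVP_DigestVerifyFinal(ctx, file.data() + msgLen, sigLen) == 1;
    EVP_MD_CTX_free(ctx);
    return ok;
}

But if the signature really is PSS-R (with message recovery), I presume plain PSS verification like this would fail, which is exactly what I'm trying to confirm.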
I am trying to encrypt a file using AES with Crypto++. I can see the functions EncryptFile and DecryptFile, which use DefaultEncryptorWithMAC/DefaultDecryptorWithMAC, in test.cpp in Crypto++.
void EncryptFile(const char *in, const char *out, const char *passPhrase)
{
    FileSource f(in, true, new DefaultEncryptorWithMAC(passPhrase, new FileSink(out)));
}

void DecryptFile(const char *in, const char *out, const char *passPhrase)
{
    FileSource f(in, true, new DefaultDecryptorWithMAC(passPhrase, new FileSink(out)));
}
However, I want to use AES, and as far as I understand, the default encryption scheme is DES_EDE2. Is there a built-in way to handle this?
I don't need a MAC so something similar to the DefaultEncryptor/DefaultDecryptor class pair would be good enough.
Also, I would prefer to use a random SecByteBlock instead of a passphrase, like the following:
// Generate a random key
AutoSeededRandomPool rnd; // added for completeness: the RNG was implicit in my snippet
SecByteBlock key(0x00, AES::DEFAULT_KEYLENGTH);
rnd.GenerateBlock( key, key.size() );
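Something like the following is what I'm after (a sketch; CBC mode, the StreamTransformationFilter, and the IV handling are my own guesses rather than an existing Default* class):

#include "cryptopp/aes.h"
#include "cryptopp/files.h"
#include "cryptopp/filters.h"
#include "cryptopp/modes.h"
using namespace CryptoPP;

// Sketch: AES-CBC file encryption with a caller-supplied random key and IV (no MAC).
void EncryptFileAES(const char *in, const char *out,
                    const SecByteBlock &key, const byte iv[AES::BLOCKSIZE])
{
    CBC_Mode<AES>::Encryption enc;
    enc.SetKeyWithIV(key, key.size(), iv);
    FileSource f(in, true,
        new StreamTransformationFilter(enc, new FileSink(out)));
}

The IV would be produced with rnd.GenerateBlock as well and stored alongside the ciphertext.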
However, I want to use AES, and as far as I understand, the default encryption scheme is DES_EDE2. Is there a built-in way to handle this?
The project changed the defaults. The commit of interest is Commit bfbcfeec7ca7, and the issue of interest is Issue 345. It will be available in Crypto++ 5.7.
For those who need the old algorithms, they can use LegacyEncryptor, LegacyDecryptor, LegacyEncryptorWithMAC and LegacyDecryptorWithMAC.
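For example, re-encrypting an old file under the new defaults might look like this (a sketch, assuming the class names above and the same FileSource pattern as in the question):

#include "cryptopp/default.h"
#include "cryptopp/files.h"
#include "cryptopp/filters.h"
#include <string>
using namespace CryptoPP;

// Sketch: decrypt with the legacy algorithms, re-encrypt with the new defaults.
void MigrateFile(const char *in, const char *out, const char *passPhrase)
{
    std::string recovered;
    FileSource f(in, true,
        new LegacyDecryptorWithMAC(passPhrase, new StringSink(recovered)));
    StringSource s(recovered, true,
        new DefaultEncryptorWithMAC(passPhrase, new FileSink(out)));
}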
The Mash function was retained to keep things less complicated. If these were new classes, they would have used HKDF to extract and expand the entropy, and used PBKDF to grind over the derived key. Since both old and new needed support, we opted for the single-source solution. As far as I know, the Mash function meets the security goals.
If you are not using Master, then you can also change the following typedefs in default.h to suit your taste. However, you will need to recompile the library afterwards:
typedef DES_EDE2 DefaultBlockCipher;
typedef SHA DefaultHashModule;
typedef HMAC<DefaultHashModule> DefaultMAC;
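For example, to switch the defaults to AES with, say, SHA-256 (my choice of hash, purely as an illustration), the typedefs would become:

typedef AES DefaultBlockCipher;
typedef SHA256 DefaultHashModule;
typedef HMAC<DefaultHashModule> DefaultMAC;

followed by a rebuild of the library, as noted above.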
We have 2 separate products that need to communicate with each other via web services.
What is the best practice to support versioning of the API?
I found this article from 2004 claiming there is no actual standard, only best practices. Are there any better solutions? How do you solve WS versioning?
Problem Description
System A
Client
class SystemAClient{
    SystemBServiceStub systemB;
    public void consumeFromB(){
        SystemBObject bObject = systemB.getSomethingFromB(new SomethingFromBRequest("someKey"));
    }
}
Service
class SystemAService{
    public SystemAObject getSomethingFromA(SomethingFromARequest req){
        return SystemAObjectFactory.getObject(req);
    }
}
Transferable Object
Version 1
class SystemAObject{
    Integer id;
    String name;
    ... // getters and setters etc;
}
Version 2
class SystemAObject{
    Long id;
    String name;
    String description;
    ... // getters and setters etc;
}
Request Object
Version 1
class SomethingFromARequest {
    Integer requestedId;
    ... // getters and setters etc;
}
Version 2
class SomethingFromARequest {
    Long requestedId;
    ... // getters and setters etc;
}
System B
Client
class SystemBClient{
    SystemAServiceStub systemA;
    public void consumeFromA(){
        SystemAObject aObject = systemA.getSomethingFromA(new SomethingFromARequest(1));
        aObject.getDescription(); // fail point
        // do something with it...
    }
}
Service
class SystemBService{
    public SystemBObject getSomethingFromB(SomethingFromBRequest req){
        return SystemBObjectFactory.getObject(req);
    }
}
Transferable Object
Version 1
class SystemBObject{
    String key;
    Integer year;
    Integer month;
    Integer day;
    ... // getters and setters etc;
}
Version 2
class SystemBObject{
    String key;
    BDate date;
    ... // getters and setters etc;
}

class BDate{
    Integer year;
    Integer month;
    Integer day;
    ... // getters and setters etc;
}
Request Object
Version 1
class SomethingFromBRequest {
    String key;
    ... // getters and setters etc;
}
Version 2
class SomethingFromBRequest {
    String key;
    BDate afterDate;
    BDate beforeDate;
    ... // getters and setters etc;
}
Fail Scenarios
If a System A client of version 1 calls a System B service of version 2 it can fail on:
missing methods on SystemBObject (getYear(), getMonth(), getDay())
Unknown type BDate
If a System A client of version 2 calls a System B service of version 1 it can fail on:
Unknown type BDate on the SomethingFromBRequest (A client uses a newer B request object that B version 1 doesn't recognize)
If the System A client is smart enough to use version 1 of the request object, it can fail on missing methods on the SystemBObject object (getDate())
If a System B client of version 1 calls a System A service of version 2 it can fail on:
Type mismatch or overflow on SystemAObject (returned Long but expected Integer)
If a System B client of version 2 calls a System A service of version 1 it can fail on:
Type mismatch or overflow on SystemARequest (request Long instead of Integer)
If the request somehow passed, casting issues (the stub is Long but the service returns an Integer, which is not necessarily compatible in all WS implementations)
Possible solutions
Use numbers when advancing versions: e.g. SystemAObject1, SystemBRequest2, etc., but this is missing an API for matching source/target versions
In the signature, pass XML and not objects (yuck, pass escaped XML in XML, double serialization, deserialization / parsing, unparsing)
Other: e.g. does Document/literal or WS-I have a remedy?
I prefer the Salesforce.com method of versioning. Each version of the Web Services gets a distinct URL in the format of:
http://api.salesforce.com/{version}/{serviceName}
So you'll have Web Service URLs that look like:
http://api.salesforce.com/14/Lead
http://api.salesforce.com/15/Lead
and so on...
With this method, you get the benefits of:
You always know which version you're talking to.
Backwards compatibility is maintained.
You don't have to worry about dependency issues. Each version has the complete set of services. You just have to make sure you don't mix versions between calls (but that's up to the consumer of the service, not you as the developer).
The solution is to avoid incompatible changes to your types.
Take, for example, SystemBObject. You describe "version 1" and "version 2" of this type, but they are not the same type at all. A compatible change to this type involves only adding properties, and never changing the type of any existing property. Your hypothetical "version update" has violated both of those constraints.
By following that one guideline, you can avoid ALL of the problems you described.
Therefore, if this is your type definition in version 1:
class SystemBObject{ // version 1
    String key;
    Integer year;
    Integer month;
    Integer day;
    ... // getters and setters etc;
}
Then, this cannot be your type definition in v2:
// version 2 - NO NO NO
class SystemBObject{
    String key;
    BDate date;
    ... // getters and setters etc;
}
...because it has eliminated existing fields. If that is the change you need to make, it is not a new "version", it is a new type, and should be named as such, both in code and in the serialization format.
Another example: if this is your existing v1 type:
class SomethingFromARequest {
    Integer requestedId;
    ... // getters and setters etc;
}
... then this is not a valid "v2" of that type:
class SomethingFromARequest {
    Long requestedId;
    ... // getters and setters etc;
}
...because you have changed the type of the existing property.
These constraints are explained in much more detail, in a mostly technology-neutral way, in Microsoft's Service Versioning article.
Aside from avoiding that source of incompatibility, you can and should include a version number in the type. This can be a simple serial number. If you are in the habit of logging or auditing messages, and bandwidth and storage space are not a problem, you may want to augment the simple integer with a UUID to identify an instance of each unique version of a type.
Also, you can design forward-compatibility into your data transfer objects, by using lax processing, and mapping "extra" data into an "extra" field. If XML is your serialization format, then you might use xsd:xmlAny or xsd:any and processContents="lax" to capture any unrecognized schema elements, when a v1 service receives a v2 request (more). If your serialization format is JSON, with its more open content model, then this comes for free.
I think something else to keep in mind is your client base: are you publishing this service publicly, or is it restricted to a set of known agents?
I'm involved in the latter situation, and we've found it's not that difficult to solve this via simple communication / stakeholdering.
Though it's only indirectly related to your question, we've found that basing our version number on compatibility seems to work quite well. Using A.B.C as an example:
A: Changes which require recompilation (breaks backwards compatibility)
B: Changes which do not require recompilation, but have additional features not available without doing so (new operations etc.)
C: Changes to the underlying mechanics that do not alter the WSDL
I know this is late to the game, but I've been digging into this issue rather deeply. I really think the best answer involves another piece of the puzzle: a service intermediary. Microsoft's Managed Services Engine is an example of one - I'm sure others exist as well. Basically, by changing the XML namespace of your web service (to include a version number or date, as the linked article mentions), you allow the intermediary the capability to route the various client calls to the appropriate server implementations. An additional (and, IMHO, very cool) feature of MSE is the ability to perform policy-based transformation. You can define XSLT transforms that convert v1 requests into v2 requests, and v2 responses into v1 responses, allowing you to retire the v1 service implementations without breaking client implementations.