How to encrypt and decrypt a const char* in WinRT - c++

I have been trying to write encrypt and decrypt functions whose signatures require the input and output strings to be of type void* only. The code works fine when the inputs can be specified as IBuffer^, but in the void* case the source string and the encrypted-then-decrypted string do not match.
IBuffer^ byteArrayToIBufferPtr(byte *source, int size)
{
Platform::ArrayReference<uint8> blobArray(source, size);
IBuffer ^buffer = CryptographicBuffer::CreateFromByteArray(blobArray);
return buffer;
}
byte* IBufferPtrToByteArray(IBuffer ^buffer)
{
Array<unsigned char,1U> ^platArray = ref new Array<unsigned char,1U>(256);
CryptographicBuffer::CopyToByteArray(buffer,&platArray);
byte *dest = platArray->Data;
return dest;
}
int DataEncryption::encryptData(EncryptionAlgorithm algo, int keySize, void* srcData, const unsigned int srcSize,
void*& encData, unsigned int& encSize)
{
LOG_D(TAG, "encryptData()");
if(srcData == nullptr)
{
LOG_E(TAG,"");
return DataEncryption::RESULT_EMPTY_DATA_ERROR;
}
if(srcSize == 0)
{
LOG_E(TAG,"");
return DataEncryption::RESULT_SIZE_ZERO_ERROR;
}
IBuffer^ encrypted;
IBuffer^ buffer;
IBuffer^ iv = nullptr;
String^ algName;
bool cbc = false;
switch (algo)
{
case DataEncryption::ENC_DEFAULT:
algName = "AES_CBC";
cbc = true;
break;
default:
break;
}
// Open the algorithm provider for the algorithm specified on input.
SymmetricKeyAlgorithmProvider^ Algorithm = SymmetricKeyAlgorithmProvider::OpenAlgorithm(algName);
// Generate a symmetric key.
IBuffer^ keymaterial = CryptographicBuffer::GenerateRandom((keySize + 7) / 8);
CryptographicKey^ key;
try
{
key = Algorithm->CreateSymmetricKey(keymaterial);
}
catch(InvalidArgumentException^ e)
{
LOG_E(TAG,"encryptData(): Could not create key.");
return DataEncryption::RESULT_ERROR;
}
// CBC mode needs Initialization vector, here just random data.
// IV property will be set on "Encrypted".
if (cbc)
iv = CryptographicBuffer::GenerateRandom(Algorithm->BlockLength);
// Set the data to encrypt.
IBuffer ^srcDataBuffer = byteArrayToIBufferPtr(static_cast<byte*>(srcData),256);
// Encrypt and create an authenticated tag.
encrypted = CryptographicEngine::Encrypt(key, srcDataBuffer, iv);
//encData = encrypted;
byte *bb = IBufferPtrToByteArray(encrypted);
encData = IBufferPtrToByteArray(encrypted);
encSize = encrypted->Length;
return DataEncryption::RESULT_SUCCESS;
}
int DataEncryption::decryptData(EncryptionAlgorithm algo, int keySize, void* encData, const unsigned int encSize,
void*& decData, unsigned int& decSize)
{
LOG_D(TAG, "decryptData()");
if(encData == nullptr)
{
LOG_E(TAG,"");
return DataEncryption::RESULT_EMPTY_DATA_ERROR;
}
if(encSize == 0)
{
LOG_E(TAG,"");
return DataEncryption::RESULT_SIZE_ZERO_ERROR;
}
IBuffer^ encrypted;
IBuffer^ decrypted;
IBuffer^ iv = nullptr;
String^ algName;
bool cbc = false;
switch (algo)
{
case DataEncryption::ENC_DEFAULT:
algName = "AES_CBC";
cbc = true;
break;
default:
break;
}
// Open the algorithm provider for the algorithm specified on input.
SymmetricKeyAlgorithmProvider^ Algorithm = SymmetricKeyAlgorithmProvider::OpenAlgorithm(algName);
// Generate a symmetric key.
IBuffer^ keymaterial = CryptographicBuffer::GenerateRandom((keySize + 7) / 8);
CryptographicKey^ key;
try
{
key = Algorithm->CreateSymmetricKey(keymaterial);
}
catch(InvalidArgumentException^ e)
{
LOG_E(TAG,"encryptData(): Could not create key.");
return DataEncryption::RESULT_ERROR;
}
// CBC mode needs Initialization vector, here just random data.
// IV property will be set on "Encrypted".
if (cbc)
iv = CryptographicBuffer::GenerateRandom(Algorithm->BlockLength);
// Set the data to decrypt.
byte *cc = static_cast<byte*>(encData);
IBuffer ^encDataBuffer = byteArrayToIBufferPtr(cc,256);
// Decrypt and verify the authenticated tag.
decrypted = CryptographicEngine::Decrypt(key, encDataBuffer, iv);
byte *bb = IBufferPtrToByteArray(decrypted);
decData = IBufferPtrToByteArray(decrypted);
decSize = decrypted->Length;
return DataEncryption::RESULT_SUCCESS;
}

I'm guessing that the problem is with this function:
byte* IBufferPtrToByteArray(IBuffer ^buffer)
{
Array<unsigned char,1U> ^platArray = ref new Array<unsigned char,1U>(256);
CryptographicBuffer::CopyToByteArray(buffer,&platArray);
byte *dest = platArray->Data;
return dest;
}
What you're doing there is allocating a new Platform::Array<byte>^ with one reference, getting a pointer to its internally-managed storage, then returning that pointer. At that point the Array's only reference goes away, so it deallocates its underlying storage; the pointer you return therefore refers to freed memory, and the next allocation is likely to overwrite those bytes.
What you'll need to do is take the Array<byte>^ returned by reference from CopyToByteArray() (which creates a new Array, presumably wrapping the bytes of the input IBuffer^) and copy that array's contents while the Array is still alive.
Your end result will function similarly to this snippet from the Readium SDK project, which takes a std::string instance, hashes it using SHA-1, and copies the hash data into a member variable uint8_t _key[KeySize]:
using namespace ::Platform;
using namespace ::Windows::Security::Cryptography;
using namespace ::Windows::Security::Cryptography::Core;
auto byteArray = ArrayReference<byte>(reinterpret_cast<byte*>(const_cast<char*>(str.data())), str.length());
auto inBuf = CryptographicBuffer::CreateFromByteArray(byteArray);
auto keyBuf = HashAlgorithmProvider::OpenAlgorithm(HashAlgorithmNames::Sha1)->HashData(inBuf);
Array<byte>^ outArray = nullptr;
CryptographicBuffer::CopyToByteArray(keyBuf, &outArray);
memcpy_s(_key, KeySize, outArray->Data, outArray->Length);
The steps:
Create an ArrayReference<byte> corresponding to the bytes in the std::string (no copying).
Pass that to CryptographicBuffer::CreateFromByteArray() to get your IBuffer^. Still no copying of data.
Call your hash/encryption function, passing the IBuffer^ you just made. You get another IBuffer^ in return, which may or may not be using the exact same storage (that's really up to the implementation of the algorithm, I think).
Create a variable of type Array<byte>^. Don't allocate an object; you're going to be given one by reference.
Pass the address of that object into CryptographicBuffer::CopyToByteArray() to receive a copy of your key data.
While that Array^ remains valid, copy its bytes into your native array.
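For illustration, here is a minimal sketch of such a helper (a hypothetical replacement for IBufferPtrToByteArray; it assumes the caller takes ownership of the returned buffer and frees it with delete[]):
// Sketch: copy the bytes out of the IBuffer^ into heap memory the caller owns,
// instead of returning a pointer into a temporary Platform::Array^.
byte* IBufferToNewByteArray(IBuffer^ buffer, unsigned int& outSize)
{
    Array<byte>^ platArray = nullptr;
    CryptographicBuffer::CopyToByteArray(buffer, &platArray); // allocates and fills platArray
    outSize = platArray->Length;
    byte* dest = new byte[outSize];
    memcpy_s(dest, outSize, platArray->Data, outSize);        // copy while platArray is still alive
    return dest;                                              // caller must delete[] this
}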


using a bytes field as proxy for arbitrary messages

Hello nano developers,
I'd like to realize the following proto:
message container {
enum MessageType {
TYPE_UNKNOWN = 0;
evt_resultStatus = 1;
}
required MessageType mt = 1;
optional bytes cmd_evt_transfer = 2;
}
message evt_resultStatus {
required int32 operationMode = 1;
}
...
The dots denote that there are more messages to come, each containing (multiple) primitive datatypes. The enum will grow likewise; I just wanted to keep it short.
The container gets generated as:
typedef struct _container {
container_MessageType mt;
pb_callback_t cmd_evt_transfer;
} container;
evt_resultStatus is:
typedef struct _evt_resultStatus {
int32_t operationMode;
} evt_resultStatus;
The field cmd_evt_transfer should act as a proxy for subsequent messages like evt_resultStatus that hold primitive datatypes.
evt_resultStatus shall be encoded into bytes and placed into the cmd_evt_transfer field.
Then the container shall be encoded, and the encoding result will be used for subsequent transfers.
The background for doing it this way is to shorten the proto definition and avoid oneof. Unfortunately proto3 syntax is not fully supported, so we cannot make use of Any fields.
The first question is: will this approach be possible?
What I've got so far is the encoding, including the callback, which seems to behave fine. But on the other side, decoding somehow skips the callback. I've read issues here where this also happened when using oneof and bytes fields.
Can someone please clarify on how to proceed with this?
Sample code so far I got:
bool encode_msg_test(pb_byte_t* buffer, int32_t sval, size_t* sz, char* err) {
evt_resultStatus rs = evt_resultStatus_init_zero;
rs.operationMode = sval;
pb_ostream_t stream = pb_ostream_from_buffer(buffer, sizeof(buffer));
/*encode container*/
container msg = container_init_zero;
msg.mt = container_MessageType_evt_resultStatus;
msg.cmd_evt_transfer.arg = &rs;
msg.cmd_evt_transfer.funcs.encode = encode_cb;
if(! pb_encode(&stream, container_fields, &msg)) {
const char* local_err = PB_GET_ERROR(&stream);
sprintf(err, "pb_encode error: %s", local_err);
return false;
}
*sz = stream.bytes_written;
return true;
}
bool encode_cb(pb_ostream_t *stream, const pb_field_t *field, void * const *arg) {
evt_resultStatus* rs = (evt_resultStatus*)(*arg);
// with the below in place, a 'stream full' error arises
// if (! pb_encode_tag_for_field(stream, field)) {
// return false;
// }
if(! pb_encode(stream, evt_resultStatus_fields, rs)) {
return false;
}
return true;
}
//buffer holds previously encoded data
bool decode_msg_test(pb_byte_t* buffer, int32_t* sval, size_t msg_len, char* err) {
container msg = container_init_zero;
evt_resultStatus res = evt_resultStatus_init_zero;
msg.cmd_evt_transfer.arg = &res;
msg.cmd_evt_transfer.funcs.decode = decode_cb;
pb_istream_t stream = pb_istream_from_buffer(buffer, msg_len);
if(! pb_decode(&stream, container_fields, &msg)) {
const char* local_err = PB_GET_ERROR(&stream);
sprintf(err, "pb_encode error: %s", local_err);
return false;
}
*sval = res.operationMode;
return true;
}
bool decode_cb(pb_istream_t *istream, const pb_field_t *field, void **arg) {
evt_resultStatus * rs = (evt_resultStatus*)(*arg);
if(! pb_decode(istream, evt_resultStatus_fields, rs)) {
return false;
}
return true;
}
I feel I don't have a proper understanding of the encoding/decoding process.
Is it correct to assume:
the first call of pb_encode (in encode_msg_test) takes care of the mt field
the second call of pb_encode (in encode_cb) handles the cmd_evt_transfer field
If I do:
bool encode_cb(pb_ostream_t *stream, const pb_field_t *field, void * const *arg) {
evt_resultStatus* rs = (evt_resultStatus*)(*arg);
if (! pb_encode_tag_for_field(stream, field)) {
return false;
}
if(! pb_encode(stream, evt_resultStatus_fields, rs)) {
return false;
}
return true;
}
then I get a 'stream full' error on the call to pb_encode.
Why is that?
Yes, the approach is reasonable. Nanopb callbacks do not care what the actual data read or written by the callback is.
As for why your decode callback is not working, you'll need to post the code you are using for decoding.
(As an aside, the Any type does work in nanopb and is covered by this test case. But the type_url included in every Any message gives them quite a large overhead.)
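For what it's worth, the usual pattern for placing a nested message into a bytes callback field is to write the field tag and then length-prefix the payload with pb_encode_submessage; plain pb_encode writes neither tag nor length, so the decoder cannot tell where the nested message starts or ends. A minimal sketch of the encode callback, reusing the names from the question:
// Sketch: encode evt_resultStatus as a length-delimited bytes field.
bool encode_cb(pb_ostream_t *stream, const pb_field_t *field, void * const *arg) {
    const evt_resultStatus *rs = (const evt_resultStatus*)(*arg);
    if (!pb_encode_tag_for_field(stream, field))
        return false;
    // pb_encode_submessage writes the length prefix followed by the message body.
    return pb_encode_submessage(stream, evt_resultStatus_fields, rs);
}
The decode callback can then stay as written above, since nanopb hands it a substream limited to exactly that length. (If writing the tag overflows the stream, also check the real size passed to pb_ostream_from_buffer: sizeof(buffer) on a pb_byte_t* parameter is only the size of the pointer.)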

ReadOnlySequence<char> conversion from ReadOnlySequence<byte>?

I'm trying to use System.IO.Pipelines to parse large text files.
But I can't find any conversion function from ReadOnlySequence<byte> to ReadOnlySequence<char>, for example something like MemoryMarshal.Cast<byte,char>.
IMHO it is pretty useless having a generic ReadOnlySequence<T> if there is only one particular type (byte) applicable.
static async Task ReadPipeAsync(PipeReader reader, IStringValueFactory factory)
{
while (true)
{
ReadResult result = await reader.ReadAsync();
ReadOnlySequence<byte> buffer = result.Buffer;
//ReadOnlySequence<char> chars = buffer.CastTo<char>(); ???
}
}
You would have to write a conversion operator to achieve this cast; you cannot cast it directly. Be aware that a char is two bytes, so you need to choose your encoding accordingly.
IMHO it is pretty useless having a generic ReadOnlySequence<T> if
there is only one particular type (byte) applicable.
While it's true that System.IO.Pipelines will only give you a ReadOnlySequence<byte>, because a PipeReader is attached to a Stream, which is just a stream of bytes, there are other use cases for a ReadOnlySequence<T>, e.g.:
ReadOnlySequence<char> roChars = new ReadOnlySequence<char>("some chars".ToCharArray());
ReadOnlySequence<string> roStrings = new ReadOnlySequence<string>(new string[] { "string1", "string2", "Another String" });
Your conversion operator would have similar logic to the below, but you would set your encoding appropriately.
static void Main(string[] args)
{
// create a 64k Readonly sequence of random bytes
var ros = new ReadOnlySequence<byte>(GenerateRandomBytes(64000));
//Optionally extract the section of the ReadOnlySequence we are interested in
var mySlice = ros.Slice(22222, 55555);
char[] charArray;
// Check if the slice is a single segment - not really necessary
// included for explanation only
if(mySlice.IsSingleSegment)
{
charArray = Encoding.ASCII.GetString(mySlice.FirstSpan).ToCharArray();
}
else
// Could also just do this and always assume multiple segments,
// which is highly likely for a PipeReader stream
{
Span<byte> theSpan = new byte[mySlice.Length];
mySlice.CopyTo(theSpan);
// ASCII encoding: each byte becomes one char (which is 2 bytes in .NET)
charArray = Encoding.ASCII.GetString(theSpan).ToCharArray();
}
// Convert the char array back to a ReadOnlySequence<char>
var rosChar = new ReadOnlySequence<char>(charArray);
}
public static byte[] GenerateRandomBytes(int length)
{
// Create a buffer
byte[] randBytes;
if (length >= 1)
randBytes = new byte[length];
else
randBytes = new byte[1];
// Create a new RNGCryptoServiceProvider.
System.Security.Cryptography.RNGCryptoServiceProvider rand =
new System.Security.Cryptography.RNGCryptoServiceProvider();
// Fill the buffer with random bytes.
rand.GetBytes(randBytes);
// return the bytes.
return randBytes;
}

C++/CX How to convert from IRandomAccessStream^ to bytes and back. (UWP)

So I have a stream; what I want is to be able to convert it into unsigned char* bytes and then back into a usable stream.
The stream is an image (it is binary, if that matters).
Basically what I am trying now is as follows:
IRandomAccessStream^ inputStream;
DataWriter^ dataWriter = ref new DataWriter(inputStream);
IBuffer^ buffer1 = dataWriter->DetachBuffer();
unsigned char * bytes = GetPointerToPixelData(buffer1);
DataWriter ^writer = ref new DataWriter();
writer->WriteBytes(Platform::ArrayReference<BYTE>(bytes, sizeof(bytes)));
task<DataWriterStoreOperation^>(writer->StoreAsync()).get();
task<Windows::Foundation::IAsyncOperation<bool>>(writer->FlushAsync()).get();
IBuffer ^buffer2 = writer->DetachBuffer();
IRandomAccessStream^ newStream;
task<Windows::Foundation::IAsyncOperationWithProgress<unsigned int, unsigned int>>(newStream->WriteAsync(buffer2)).get();
task<Windows::Foundation::IAsyncOperation<bool>>(newStream->FlushAsync()).get();
UseNewStream(newStream)
I have added all of these task<...> wrappers because it does not work without them, and I am still not sure how to make it work.
The function GetPointerToPixelData I found online is the following:
byte* GetPointerToPixelData(IBuffer^ buffer)
{
// Cast to Object^, then to its underlying IInspectable interface.
Object^ obj = buffer;
ComPtr<IInspectable> insp(reinterpret_cast<IInspectable*>(obj));
// Query the IBufferByteAccess interface.
ComPtr<IBufferByteAccess> bufferByteAccess;
insp.As(&bufferByteAccess);
// Retrieve the buffer data.
byte* pixels = nullptr;
bufferByteAccess->Buffer(&pixels);
return pixels;
}
Thanks! :)
Firstly, if you want to transfer the image stream into bytes, you need to read the stream with a DataReader, not a DataWriter, which is for writing data. Secondly, the DetachBuffer() method detaches the buffer that is associated with the data writer; it does not read the buffer. Lastly, DataReader can read bytes directly with the ReadBytes(Byte[]) method. For example:
uint64 length = 0;
BYTE *extracted;
void CleanImagetobyte::MainPage::btnconverttobyte_Click(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e)
{
create_task(KnownFolders::GetFolderForUserAsync(nullptr /* current user */, KnownFolderId::PicturesLibrary))
.then([this](StorageFolder^ picturesFolder)
{
return picturesFolder->GetFileAsync("B.jpg");
}).then([this](task<StorageFile^> task)
{
try
{
StorageFile^ file = task.get();
auto name = file->Name;
return file->OpenAsync(FileAccessMode::Read);
}
catch (Exception^ e)
{
// I/O errors are reported as exceptions.
}
}).then([this](task<IRandomAccessStream^> task)
{
IRandomAccessStream^ inputStream = task.get();
length = inputStream->Size;
IBuffer^ buffer = ref new Buffer((unsigned int)inputStream->Size);
// Wait for the read to complete before handing the buffer to the DataReader.
return inputStream->ReadAsync(buffer, (unsigned int)inputStream->Size, InputStreamOptions::None);
}).then([this](IBuffer^ buffer)
{
DataReader^ reader = DataReader::FromBuffer(buffer);
extracted = new BYTE[buffer->Length];
reader->ReadBytes(Platform::ArrayReference<BYTE>(extracted, buffer->Length));
});
}
void CleanImagetobyte::MainPage::btnconvertback_Click(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e)
{
create_task(KnownFolders::GetFolderForUserAsync(nullptr /* current user */, KnownFolderId::PicturesLibrary))
.then([this](StorageFolder^ picturesFolder)
{
return picturesFolder->CreateFileAsync("newB.jpg", CreationCollisionOption::ReplaceExisting);
}).then([this](task<StorageFile^> task)
{
StorageFile^ file = task.get();
Array<byte>^ arr = ref new Array<byte>(extracted, (unsigned int)length);
FileIO::WriteBytesAsync(file, arr);
});
}
For more details about reading and writing a file, please refer to this document.

Golang CGO with large char pointer - SEGSERV

I have a large amount of data being read from TagLib library and passed to GoLang (mpeg image data).
Here is where data is fetched:
void audiotags_mpeg_artwork(TagLib::MPEG::File *mpegFile, int id) {
TagLib::ID3v2::Tag *id3v2 = mpegFile->ID3v2Tag(false);
if (id3v2!=nullptr) {
const TagLib::ID3v2::FrameList frameList = id3v2->frameListMap()["APIC"];
for(auto it = frameList.begin(); it != frameList.end(); it++) {
TagLib::ID3v2::AttachedPictureFrame * frame = (TagLib::ID3v2::AttachedPictureFrame *)(*it);
if (frame!=nullptr && frame->size() > 0) {
const auto &apicBase64 = frame->picture().toBase64();
auto len = apicBase64.size();
if (len > 0) {
// Generate memory for key
char* key = new char[5];
memcpy(key, "APIC", 4);
key[4]='\0';
// Generate memory for picture data
char* val = new char[len];
memcpy (val, apicBase64.data(), len);
// Send to GoLang
go_map_audiotags(id, key, val);
// Free memory
delete[] key;
delete[] val;
}
}
}
}
}
At this point, go_map_audiotags works (I use a similar method for other data). It also works for other picture data; however, depending on the size, it will crash with:
unexpected fault address 0x766a000
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x1 addr=0x766a000 pc=0x404530b]
Within GoLang, I have the following export:
//export go_map_audiotags
func go_map_audiotags(id C.int, key *C.char, val *C.char) {
m := maps[int(id)]
k := strings.ToLower(C.GoString(key))
log.Println("go_map_audiotags k:", k) // <--- works
v := C.GoString(val) // <--- crashes
log.Println("go_map_audiotags v:", v) // <--- Does not reach
m[k] = v
}
Is there a better way I should be transporting this data? I assume what's happening is:
1) Some limit on the C.char data is being reached
2) C++ is, for some reason, recycling the memory before v is set in GoLang
The data stored in val is not null-terminated: when you make the copy with memcpy, the null terminator is not included. Change the C++ code to:
// Generate memory for picture data
char* val = new char[len+1];
memcpy (val, apicBase64.data(), len);
val[len] = '\0';

Unable to do RSA Encryption/Decryption using Crypto++ (isValidCoding is false)

I am using Crypto++ to encrypt an array of bytes using RSA. I have followed the Crypto++ wiki's samples with no luck getting them to work. Encryption and decryption in all the samples are done within a single process, but I am trying to decrypt content that was encrypted in another process.
Here is my code:
class FixedRNG : public CryptoPP::RandomNumberGenerator
{
public:
FixedRNG(CryptoPP::BufferedTransformation &source) : m_source(source) {}
void GenerateBlock(byte *output, size_t size)
{
m_source.Get(output, size);
}
private:
CryptoPP::BufferedTransformation &m_source;
};
uint16_t Encrypt()
{
byte *oaepSeed = new byte[2048];
for (int i = 0; i < 2048; i++)
{
oaepSeed[i] = (byte)i;
}
CryptoPP::ByteQueue bq;
bq.Put(oaepSeed, 2048);
FixedRNG prng(bq);
Integer n("Value of N"),
e("11H"),
d("Value of D");
RSA::PrivateKey privKey;
privKey.Initialize(n, e, d);
RSA::PublicKey pubKey(privKey);
CryptoPP::RSAES_OAEP_SHA_Encryptor encryptor( pubKey );
assert( 0 != encryptor.FixedMaxPlaintextLength() );
byte blockSize = encryptor.FixedMaxPlaintextLength();
int divisionCount = fileSize / blockSize;
int proccessedBytes = 0;
// Create cipher text space
uint16_t cipherSize = encryptor.CiphertextLength( blockSize );
assert( 0 != cipherSize );
encryptor.Encrypt(prng, (byte*)plaintext, blockSize, (byte*)output);
return cipherSize;
}
void Decrypt(uint16_t cipherSize)
{
byte *oaepSeed = new byte[2048];
for (int i = 0; i < 2048; i++)
{
oaepSeed[i] = (byte)i;
}
CryptoPP::ByteQueue bq;
bq.Put(oaepSeed, 2048);
FixedRNG prng(bq);
Integer n("Value of N"),
e("11H"),
d("Value of D");
RSA::PrivateKey privKey;
privKey.Initialize(n, e, d);
//RSA::PublicKey pubKey(privKey);
CryptoPP::RSAES_OAEP_SHA_Decryptor decryptor( privKey );
byte blockSize = decryptor.FixedMaxPlaintextLength();
assert(blockSize != 0);
size_t maxPlainTextSize = decryptor.MaxPlaintextLength( cipherSize );
assert( 0 != maxPlainTextSize );
void* subBuffer = malloc(maxPlainTextSize);
CryptoPP::DecodingResult result = decryptor.Decrypt(prng, (byte*)cipherText, cipherSize, (byte*)subBuffer);
assert( result.isValidCoding );
assert( result.messageLength <= maxPlainTextSize );
}
Unfortunately, the value of isValidCoding is false. I think I am misunderstanding something about RSA encryption/decryption!
Note that privKey and pubKey have been validated using KEY.Validate(prng, 3).
I have also tried to use raw RSA instead of OAEP and SHA, with no luck. I have tried to debug through the Crypto++ code; what I am suspicious about is the prng variable. I think there is something wrong with it. I have also used AutoSeededRandomPool instead of FixedRNG, but it didn't help. It is worth knowing that if I copy the decryption code right after the encryption code and execute it in the Encrypt() method, everything is fine and isValidCoding is true!
This is probably not correct:
byte blockSize = encryptor.FixedMaxPlaintextLength();
...
encryptor.Encrypt(prng, (byte*)plaintext, blockSize, (byte*)output);
return cipherSize;
Try:
size_t maxLength = encryptor.FixedMaxPlaintextLength();
size_t cipherLength = encryptor.CiphertextLength( maxLength );
...
SecureByteBlock secBlock(cipherLength);
encryptor.Encrypt(prng, (byte*)plaintext, maxLength, secBlock);
FixedMaxPlaintextLength returns a size_t, not a byte.
You should probably be calling CiphertextLength on the plaintext length.
I'm not really sure how you are just returning a uint16_t from Encrypt().
You might do better by starting fresh and using an example from the Crypto++ wiki as a starting point. I'm not sure this design is worth pursuing.
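For reference, a minimal sketch along the lines of the Crypto++ wiki's RSA-OAEP sample, using AutoSeededRandomPool and the pipeline filters (the key generation here is just for illustration; you would initialize the keys from your own n, e and d instead):
#include <cryptopp/rsa.h>
#include <cryptopp/osrng.h>
#include <cryptopp/filters.h>
#include <string>
using namespace CryptoPP;

AutoSeededRandomPool rng;

// Generate a key pair for the sketch; in your case, build the keys from n, e, d.
InvertibleRSAFunction params;
params.GenerateRandomWithKeySize(rng, 2048);
RSA::PrivateKey rsaPriv(params);
RSA::PublicKey rsaPub(params);

std::string plain = "test message", cipher, recovered;

// Encrypt (this could run in one process)...
RSAES_OAEP_SHA_Encryptor enc(rsaPub);
StringSource ss1(plain, true,
    new PK_EncryptorFilter(rng, enc, new StringSink(cipher)));

// ...and decrypt (possibly in another process holding the same private key).
RSAES_OAEP_SHA_Decryptor dec(rsaPriv);
StringSource ss2(cipher, true,
    new PK_DecryptorFilter(rng, dec, new StringSink(recovered)));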
If you start over, then Shoup's Elliptic Curve Integrated Encryption Scheme (ECIES) would be a good choice, since it combines public-key encryption with symmetric ciphers and authentication tags.
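A minimal ECIES sketch in the same pipeline style (again loosely following the Crypto++ wiki; the curve here is only an example):
#include <cryptopp/eccrypto.h>
#include <cryptopp/oids.h>
#include <cryptopp/osrng.h>
#include <cryptopp/filters.h>
#include <string>
using namespace CryptoPP;

AutoSeededRandomPool prng;

// The decryptor owns the private key; the matching encryptor is built from it.
ECIES<ECP>::Decryptor d(prng, ASN1::secp256r1());
ECIES<ECP>::Encryptor e(d);

std::string plain = "ECIES test", cipher, recovered;

StringSource ss1(plain, true,
    new PK_EncryptorFilter(prng, e, new StringSink(cipher)));
StringSource ss2(cipher, true,
    new PK_DecryptorFilter(prng, d, new StringSink(recovered)));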