SOLVED: I was being dumb. The first argument to RSA_public_encrypt should have been key.size(), and the first argument to RSA_private_decrypt should have been RSA_size(myKey).
ORIGINAL QUESTION
Hey guys, I'm having some trouble figuring out how to do this.
Basically I just want a client and server to be able to send each other encrypted messages.
This is going to be incredibly insecure; I'm just trying to figure this all out, so I might as well start at the ground floor.
So far I've got all the keys working but encryption/decryption is giving me hell.
I'll start by saying I am using C++, but most of these functions require C strings, so whatever I'm doing there may be causing problems.
Note that on the client side I receive the following error in regards to decryption.
error:04065072:rsa routines:RSA_EAY_PRIVATE_DECRYPT:padding check failed
I don't really understand how padding works so I don't know how to fix it.
Anyway, here are the relevant variables on each side, followed by the code.
Client:
RSA *myKey; // Loaded with private key
// The below will hold the decrypted message
unsigned char* decrypted = (unsigned char*) malloc(RSA_size(myKey));
/* The below holds the encrypted string received over the network.
Originally held in a C-string but C strings never work for me and scare me
so I put it in a C++ string */
string encrypted;
// The reinterpret_cast line was to get rid of an error message.
// Maybe the cause of one of my problems?
if(RSA_private_decrypt(sizeof(encrypted.c_str()), reinterpret_cast<const unsigned char*>(encrypted.c_str()), decrypted, myKey, RSA_PKCS1_OAEP_PADDING)==-1)
{
    cout << "Private decryption failed" << endl;
    ERR_error_string(ERR_peek_last_error(), errBuf);
    printf("Error: %s\n", errBuf);
    free(decrypted);
    exit(1);
}
Server:
RSA *pkey; // Holds the client's public key
string key; // Holds a session key I want to encrypt and send
//The below will hold the encrypted message
unsigned char *encrypted = (unsigned char*)malloc(RSA_size(pkey));
// The reinterpret_cast line was to get rid of an error message.
// Maybe the cause of one of my problems?
if(RSA_public_encrypt(sizeof(key.c_str()), reinterpret_cast<const unsigned char*>(key.c_str()), encrypted, pkey, RSA_PKCS1_OAEP_PADDING)==-1)
{
    cout << "Public encryption failed" << endl;
    ERR_error_string(ERR_peek_last_error(), errBuf);
    printf("Error: %s\n", errBuf);
    free(encrypted);
    exit(1);
}
Let me once again state, in case I didn't before, that I know my code sucks but I'm just trying to establish a framework for understanding this.
I'm sorry if this offends you veteran coders.
Thanks in advance for any help you guys can provide!
Maybe not the only problem, but: the first argument to the RSA_*crypt functions is the number of bytes in the buffer. sizeof(key.c_str()) does not yield the number of bytes in key; it yields the size of the result type of key.c_str(), i.e. sizeof(const char*). You probably want to pass the number of chars in the string instead, which you can get with the size() member function.
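For reference, a minimal sketch of the corrected calls, using the variables from the question. The std::vector buffers and the roundTrip wrapper are mine, added just for illustration; error handling is trimmed:

#include <openssl/rsa.h>
#include <iostream>
#include <string>
#include <vector>

// Sketch: round-trip a session key, assuming `pkey` (public) and `myKey`
// (private) are already loaded as in the question.
bool roundTrip(RSA* pkey, RSA* myKey, const std::string& key)
{
    std::vector<unsigned char> encrypted(RSA_size(pkey));
    // Length = number of plaintext bytes, NOT sizeof(key.c_str()).
    // For OAEP this must be at most RSA_size(pkey) - 42.
    int encLen = RSA_public_encrypt(static_cast<int>(key.size()),
        reinterpret_cast<const unsigned char*>(key.data()),
        encrypted.data(), pkey, RSA_PKCS1_OAEP_PADDING);
    if (encLen == -1) return false;

    std::vector<unsigned char> decrypted(RSA_size(myKey));
    // Length = RSA_size(myKey): the ciphertext is exactly one RSA block.
    int decLen = RSA_private_decrypt(RSA_size(myKey),
        encrypted.data(), decrypted.data(), myKey, RSA_PKCS1_OAEP_PADDING);
    if (decLen == -1) return false;

    std::cout << std::string(decrypted.begin(), decrypted.begin() + decLen) << std::endl;
    return true;
}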
Related
I'm coding a C++ TCP server and I'm using a while loop to continuously receive data, but I think it keeps accepting the same thing and prints only a fragment of what it's meant to output (for example, when I send "hello").
while (true)
{
    char buffer[5];
    if (recv(clisoc, buffer, sizeof(buffer), 0)) {
        string abc = (string)buffer;
        cout << abc.substr(0, sizeof(buffer));
    }
}
Any help would be appreciated, thanks!
You only tested the return value of recv to see whether it detected a normal close, so none of your subsequent code knows how much data you actually received. Most likely you are receiving data once but printing it twice. Also, you cannot cast arbitrary data to a string; that is legal only for a pointer to an actual NUL-terminated string.
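As a rough sketch of what that loop could look like, assuming `clisoc` is the connected TCP socket from the question on a POSIX-style API (the printLoop wrapper is mine, for illustration):

#include <sys/socket.h>
#include <sys/types.h>
#include <cstdio>
#include <iostream>
#include <string>

// Sketch: use recv()'s return value so we know exactly how many bytes arrived.
void printLoop(int clisoc)
{
    while (true)
    {
        char buffer[5];
        ssize_t n = recv(clisoc, buffer, sizeof(buffer), 0);
        if (n == 0)        // peer closed the connection normally
            break;
        if (n < 0)         // error
        {
            perror("recv");
            break;
        }
        std::string abc(buffer, static_cast<size_t>(n)); // copy exactly n bytes
        std::cout << abc;
    }
}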
I have a client-server application, with the server part written in C++ (Winsock) and the client part in Java.
When sending data from the client, I first send its length followed by the actual data. For sending the length, this is the code:
clientSender.print(text.length());
where clientSender is of type PrintWriter.
On the server side, the code that reads this is
int iDataLength;
if(recv(client, (char *)&iDataLength, sizeof(iDataLength), 0) != SOCKET_ERROR)
//do something
I tried printing the value of iDataLength within the if and it always turns out to be some random large integer. If I change iDataLength's type to char, I get the correct value. However, the actual value could well exceed a char's capacity.
What is the correct way to read an integer passed over a socket in C++?
I think the problem is that PrintWriter is writing text and you are trying to read a binary number.
Here is what PrintWriter does with the integer it sends:
http://docs.oracle.com/javase/7/docs/api/java/io/PrintWriter.html#print%28int%29
Prints an integer. The string produced by String.valueOf(int) is
translated into bytes according to the platform's default character
encoding, and these bytes are written in exactly the manner of the
write(int) method.
Try something like this:
#include <sys/socket.h>
#include <cerrno>   // for errno
#include <cstring>  // for std::strerror()
#include <iostream> // for std::cerr
#include <string>   // for std::string and std::stoi()

// ... stuff

char buf[1024]; // buffer to receive text
int len;

if((len = recv(client, buf, sizeof(buf), 0)) == -1)
{
    std::cerr << "ERROR: " << std::strerror(errno) << std::endl;
    return 1;
}

std::string s(buf, len); // construct from exactly len bytes
int iDataLength = std::stoi(s); // convert text back to integer
// use iDataLength here (after sanity checks)
Are you sure endianness is not the issue? (Maybe Java encodes it as big-endian and you read it as little-endian.)
Besides, you might need to implement a receive-all function (similar to sendall), to make sure you receive exactly the number of bytes specified, because recv may return fewer bytes than it was asked for.
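A sketch of such a receive-all helper (the recvAll name is mine; it assumes a blocking, POSIX-style TCP socket):

#include <sys/socket.h>
#include <sys/types.h>
#include <cstddef>

// Sketch: keep calling recv() until the requested byte count has arrived,
// since a single recv() may return less than was asked for.
bool recvAll(int sock, char* buf, size_t len)
{
    size_t got = 0;
    while (got < len)
    {
        ssize_t n = recv(sock, buf + got, len - got, 0);
        if (n <= 0)        // 0 = peer closed, -1 = error
            return false;
        got += static_cast<size_t>(n);
    }
    return true;
}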
You have a confusion between numeric values and their ASCII representation.
When in Java you write clientSender.print(text.length()); you are actually writing an ASCII string: if the length is 15, you will send the character 1 (ASCII code 0x31) followed by the character 5 (ASCII code 0x35).
So you must either:
send a binary length in a portable way (in C or C++ you have the hton*/ntoh* functions, but I am unsure of the Java equivalent), or
add a separator (a newline) after the textual length on the Java side and decode that in C++:
char buffer[1024]; // a size big enough to read the packet
int iDataLength, l;

l = recv(client, buffer, sizeof(buffer) - 1, 0);
if (l != SOCKET_ERROR) {
    buffer[l] = 0;                      // NUL-terminate the received text
    sscanf(buffer, "%d", &iDataLength); // parse the textual length
    char *ptr = strchr(buffer, '\n');
    if (ptr == NULL) {
        // should never happen : peer does not respect protocol
        ...
    }
    ptr += 1; // ptr now points past the length
    //do something
}
The Java part should be: clientSender.println(text.length());
EDIT:
From Remy Lebeau's comment: there is no 1-to-1 relationship between sends and reads in TCP. recv() can and does return arbitrary amounts of data, so you cannot assume that a single recv() will read the entire line of text.
The code above should therefore not do a single recv but be ready to concatenate multiple reads to find the separator (left as an exercise for the reader :-) )
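For what it's worth, one minimal shape of that read loop, sketch only (readLengthLine is a hypothetical helper; it reads a byte at a time until the newline, which is simple but inefficient, and std::stoi will throw on malformed input):

#include <sys/socket.h>
#include <sys/types.h>
#include <string>

// Sketch: accumulate bytes until the '\n' separator arrives, then parse
// the textual length that precedes it.
bool readLengthLine(int client, int& iDataLength)
{
    std::string line;
    char c;
    for (;;)
    {
        ssize_t n = recv(client, &c, 1, 0); // one byte at a time
        if (n <= 0)
            return false;                   // closed or error
        if (c == '\n')
            break;                          // separator found
        line += c;
    }
    iDataLength = std::stoi(line);          // throws on malformed input
    return true;
}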
I am developing an encrypted version of a realtime communication application. The issue I have is that the encrypted data packets sent to the receiver are faulty. An example from the error log (hex-encoded data; the original data is raw bytes):
sent: 262C1688215232656B5235B691826A21C51D37A99413050BAEADB81D8892493FC0DB519250199F5BE73E18F2703946593C4F6CEA396A168B3313FA689DE84F380606ED3C322F2ADFC561B9F1571E29DF5870B59D2FCF497E01D9CD5DFCED743559C3EE5B00678966C8D73EA3A5CD810BB848309CDF0F955F949FDBA618C401DA70A10C36063261C5DBAB0FC0F1
received: 262C1688215232656B5235B691826A21C51D37A99413050BAEADB81D8892493FC0DB519250199F5BE73E18F2703946593C4F6CEA396A168B3313FA689DE84F380606ED3C322F2ADFC561B9F1571E29DF5870B59D2FCF497E01D9CD5DFCED743559C3EE5B00CDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCD
This is the call of the send method:
string encSendBuffer = sj->cipherAgent->encrypt(sj->dFC->sendBuffer, sj->dFC->sendBytes);
char* newSendBuffer = new char[encSendBuffer.length() + 1];
strcpy(newSendBuffer, encSendBuffer.c_str());
sj->dFC->s->async_send_to(boost::asio::buffer(newSendBuffer, encSendBuffer.length()),
    *sj->dFC->f,
    boost::bind(&sender::sendHandler, this,
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred)
);
sj->dFC->s is a UDP socket and sj->dFC->f is a UDP endpoint.
The error code in the sendHandler is always system:0.
This is how I do the encryption using the Crypto++ library (extract):
string cipherEngine::encrypt(char* input, int length)
{
    string cipher = "";
    CTR_Mode<AES>::Encryption e;
    e.SetKeyWithIV(key, keyLength, iv);
    ArraySource as((byte*)input, length, true,
        new StreamTransformationFilter(e,
            new StringSink(cipher)
        )
    );
    return cipher;
}
UPDATE: Code of the receive function:
void receiver::receive()
{
    int maxLength = 4096;
    sj->dFC->s->async_receive_from(boost::asio::buffer(input, maxLength),
        senderEndpoint,
        boost::bind(&receiver::handleReceiveFrom, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}
After the data is received, it is stored in the char buffer input and decrypted in the handleReceiveFrom function.
Without encryption everything is fine. The number of bytes sent is always correct, on the receiver side too. The length of the "CD" blocks is quite random. I have already checked the encryption, and the decrypted data is the same as the original plaintext.
Does anyone know where this behavior comes from?
The key here is that the erroneous data begins right after the first null byte (0x00) in your encrypted data. The following line:
strcpy(newSendBuffer, encSendBuffer.c_str());
...only copies the data up to that null byte into newSendBuffer, because strcpy() treats 0x00 as the end of the string. The send function is sending the buffer contents just fine; the buffer simply doesn't hold the data you expect. You'll need to load newSendBuffer in a different way that can handle embedded null bytes, not strcpy(). Try std::memcpy().
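A minimal sketch of that copy, using the same variable names as the question (the copyCipher wrapper is mine; note the +1 for the terminator is no longer needed, since this is binary data rather than a C string):

#include <cstring> // std::memcpy
#include <string>

// Sketch: copy the ciphertext by byte count; embedded 0x00 bytes survive.
char* copyCipher(const std::string& encSendBuffer)
{
    char* newSendBuffer = new char[encSendBuffer.length()];
    std::memcpy(newSendBuffer, encSendBuffer.data(), encSendBuffer.length());
    return newSendBuffer; // caller must delete[] after the async send completes
}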
Thank you Joachim Pileborg and Jack O'Reilly! You are right indeed.
I changed my code from strcpy(newSendBuffer, encSendBuffer.c_str());
to
for (int i = 0; i < encSendBuffer.length(); i++)
{
    newSendBuffer[i] = encSendBuffer.at(i);
}
on the sender and receiver side. It actually solved the problem. It is quite naive code, but it does what it should.
std::memcpy() seems much more elegant, and I will try it out.
Greetings, this is my first post on Stack Overflow, and I'm sorry if it's a bit long.
I'm trying to build a handshake protocol for my own project and am having issues with the server converting the client's public RSA key to a BIGNUM. It works in my client code, but the server segfaults when attempting to convert the hex value of the client's public RSA key to a BIGNUM.
I have already checked that there is no garbage before or after the RSA data, and I have looked online, but I'm stuck.
header segment:
typedef struct KEYS {
    RSA *serv;
    char* serv_pub;
    int pub_size;
    RSA *clnt;
} KEYS;
KEYS keys;
Initializing function:
// Generates and validates the server's key
/* code for generating server RSA left out, it's working */
//Set client exponent
keys.clnt = 0;
keys.clnt = RSA_new();
BN_dec2bn(&keys.clnt->e, RSA_E_S); // RSA_E_S contains the public exponent
Problem code (in Network::server_handshake):
// *Receive an encrypted message from the network and decrypt it into 'buffer' (1024 bytes long)*
cout << "Assigning clients RSA" << endl;
// I have verified that 'buffer' contains the proper key
if (BN_hex2bn(&keys.clnt->n, buffer) < 0) {
    Error("ERROR reading server RSA");
}
cout << "clients RSA has been assigned" << endl;
The program segfaults at
BN_hex2bn(&keys.clnt->n, buffer)
with the error (valgrind output)
Invalid read of size 8
at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8)
by 0x40F23E: Network::server_handshake() (Network.cpp:177)
by 0x40EF42: Network::startNet() (Network.cpp:126)
by 0x403C38: main (server.cpp:51)
Address 0x20 is not stack'd, malloc'd or (recently) free'd
Process terminating with default action of signal 11 (SIGSEGV)
Access not within mapped region at address 0x20
at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8)
And I don't know why; I'm using the exact same code in the client program, and it works just fine. Any input is greatly appreciated!
RSA_new() only creates the RSA struct; it does not create any of the BIGNUM objects inside that struct, like the n and e fields. You must create these yourself using BN_new(), or, more likely, you need to find the right OpenSSL function to generate or read in your RSA key.
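A sketch of the explicit initialization, reusing the `keys`, `buffer`, `RSA_E_S`, and `Error` names from the question. The direct struct access matches the 0.9.8-era API shown here; modern OpenSSL hides the struct and uses RSA_set0_key() instead:

#include <openssl/rsa.h>
#include <openssl/bn.h>

// Sketch: allocate the BIGNUM fields before writing into them.
keys.clnt = RSA_new();
keys.clnt->n = BN_new();
keys.clnt->e = BN_new();
BN_dec2bn(&keys.clnt->e, RSA_E_S);      // public exponent
if (BN_hex2bn(&keys.clnt->n, buffer) == 0) {
    Error("ERROR reading client RSA");  // BN_hex2bn returns 0 on failure
}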
I am doing some socket work on Symbian, which works fine so far. However, I am facing a weird problem when trying to read out the amount of data that has been received.
Assume the code looks as follows:
TSockXfrLength len;
iSocket.RecvOneOrMore(buff, 0, iStatus, len);
User::WaitForRequest(iStatus);
if (iStatus == KErrNone)
{
    printf(_L("Bytes received 1st try %4d..."), len);
    printf(_L("Bytes Length received 2nd try %4d..."), &len);
}
The output in both cases is something like 7450, although I received exactly 145 bytes; I can check that with a network analyser. Does anyone know what I am doing wrong here, such that I do not get the proper count of bytes received?
EDIT:
I am connecting to the socket in the following way:
TInetAddr serverAddr;
TUint iPort=445;
TRequestStatus iStatus;
TSockXfrLength len;
TInt res = iSocketSrv.Connect();
res = iSocket.Open(iSocketSrv,KAfInet,KSockStream, KProtocolInetTcp);
serverAddr.SetPort(iPort);
serverAddr.SetAddress(INET_ADDR(192,100,81,54));
iSocket.Connect(serverAddr,iStatus);
User::WaitForRequest(iStatus);
Hope that helps ;)
Thanks
Try
printf(_L("Bytes received 1st try %4d..."), len());
The TSockXfrLength type is actually a typedef of TPckgBuf<TInt>. This is the Symbian descriptor way of storing arbitrary simple data in an 8-bit descriptor. To retrieve the value from the len object you need to use the () operator.
More information about the TPckg* classes is available in the Symbian Developer Library.
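Sketched against the code from the question (mirroring its printf style), the fixed read would look roughly like this; the second printf with &len is dropped, since printing the address of the package buffer was never meaningful:

// Sketch: unwrap the TInt stored in the TPckgBuf<TInt> with operator().
TSockXfrLength len;
iSocket.RecvOneOrMore(buff, 0, iStatus, len);
User::WaitForRequest(iStatus);
if (iStatus == KErrNone)
{
    TInt bytesReceived = len();  // operator() returns the wrapped TInt
    printf(_L("Bytes received %4d..."), bytesReceived);
}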