I run a game server: one main server handles all packets sent from players (clients).
I want to encrypt my packets with AES so that each packet is unique (I think I need an IV here), and the server should accept each encrypted packet only once, so that if someone sniffs a packet they cannot send it again.
How do I do this?
P.S. I write both the server and the client in C++.
You could use an SSL API such as OpenSSL, but that may be overkill in your scenario, as you would need certificates etc. There is an open-source C++ implementation of Rijndael (the algorithm AES is based on) here.
Here is an example of its usage:
void testEncryptBlock()
{
    const int nCharacters = 16;
    char szHex[33];
    char *EncryptedData = new char[nCharacters + 1];

    CRijndael rijndael;
    int result = rijndael.MakeKey("abcdefghabcdefghabcdefghabcdefgh");

    // Add some dummy data for the sake of the demo
    EncryptedData = (char*)memset(EncryptedData, 255, nCharacters); // 0xffff...
    result = rijndael.EncryptBlock(EncryptedData);

    ... // Do something with the data

    delete [] EncryptedData;
}
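The block cipher above only gives you confidentiality; the accept-each-packet-once requirement from the question still has to be handled at the protocol level. Below is a minimal sketch of one common approach, assuming you prepend a per-client, strictly increasing sequence number to every packet; the names (PacketHeader, ReplayGuard) are illustrative and not part of the CRijndael library.

#include <cstdint>
#include <unordered_map>

// Hypothetical header sent in front of the (encrypted) payload of every packet.
struct PacketHeader {
    uint64_t clientId;   // which client sent the packet
    uint64_t sequence;   // strictly increasing per client; can also be mixed into the IV
};

class ReplayGuard {
public:
    // Returns true the first time a newer sequence number is seen for a client,
    // false for any replayed (or stale) packet.
    bool Accept(const PacketHeader& hdr) {
        uint64_t& last = lastSequence_[hdr.clientId];  // starts at 0 for unknown clients
        if (hdr.sequence <= last)
            return false;        // already seen or older: reject the replay
        last = hdr.sequence;     // remember the newest accepted sequence number
        return true;
    }

private:
    std::unordered_map<uint64_t, uint64_t> lastSequence_;
};

Deriving the IV from the sequence number keeps ciphertexts unique even for identical payloads; if packets can legitimately arrive out of order, a small sliding window of recently seen sequence numbers replaces the single counter.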
I'm writing code to send raw Ethernet frames between two Linux boxes. To test this I just want to get a simple client-send and server-receive.
I have the client correctly making packets (I can see them using a packet sniffer).
On the server side I initialize the socket like so:
fd = socket(PF_PACKET, SOCK_RAW, htons(MY_ETH_PROTOCOL));
where MY_ETH_PROTOCOL is a 2-byte constant I use as an EtherType so I don't hear extraneous network traffic.
When I bind this socket to my interface I must pass it a protocol again in the sockaddr_ll struct:
socket_address.sll_protocol = htons(MY_ETH_PROTOCOL);
If I compile and run the code like this then it fails. My server does not see the packet. However if I change the code like so:
socket_address.sll_protocol = htons(ETH_P_ALL);
The server can then see the packet sent from the client (as well as many other packets), so I have to do some checking of the packet to see that it matches MY_ETH_PROTOCOL.
But I don't want my server to hear traffic that isn't being sent on the specified protocol so this isn't a solution. How do I do this?
I have resolved the issue.
According to http://linuxreviews.org/dictionary/Ethernet/ referring to the 2 byte field following the MAC addresses:
"values of that field between 64 and 1522 indicated the use of the new 802.3 Ethernet format with a length field, while values of 1536 decimal (0600 hexadecimal) and greater indicated the use of the original DIX or Ethernet II frame format with an EtherType sub-protocol identifier."
So I have to make sure my EtherType is >= 0x0600.
According to http://standards.ieee.org/regauth/ethertype/eth.txt, use of 0x88b5 and 0x88b6 is "available for public use for prototype and vendor-specific protocol development." So this is what I am going to use as an EtherType. I shouldn't need any further filtering, as the kernel should make sure to only pick up Ethernet frames with the right destination MAC address and using that protocol.
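For reference, a minimal sketch of the receive side using an EtherType from that range (0x88B5 here); the interface name "eth0" and the single blocking recv are placeholders, not part of the original code:

#include <cstdio>
#include <cstring>
#include <sys/socket.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <net/if.h>
#include <arpa/inet.h>
#include <unistd.h>

// EtherType from the IEEE "public use / prototype" range mentioned above.
#define MY_ETH_PROTOCOL 0x88B5

int main() {
    // Raw socket that only delivers frames carrying MY_ETH_PROTOCOL.
    int fd = socket(PF_PACKET, SOCK_RAW, htons(MY_ETH_PROTOCOL));
    if (fd < 0) { perror("socket"); return 1; }

    // Bind to a specific interface so we only see its traffic.
    struct sockaddr_ll socket_address;
    memset(&socket_address, 0, sizeof(socket_address));
    socket_address.sll_family   = AF_PACKET;
    socket_address.sll_protocol = htons(MY_ETH_PROTOCOL);
    socket_address.sll_ifindex  = if_nametoindex("eth0");
    if (bind(fd, (struct sockaddr *)&socket_address, sizeof(socket_address)) < 0) {
        perror("bind");
        return 1;
    }

    char frame[ETH_FRAME_LEN];
    ssize_t n = recv(fd, frame, sizeof(frame), 0);  // blocks until a matching frame arrives
    printf("received %zd bytes\n", n);
    close(fd);
    return 0;
}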
I've worked around this problem in the past by using a packet filter.
Hand Waving (untested pseudocode)
struct bpf_insn my_filter[] = {
    ...   // BPF instructions that accept only your protocol
};

int s = socket(PF_PACKET, SOCK_DGRAM, htons(protocol));

struct sock_fprog pf;
pf.filter = my_filter;
pf.len = sizeof(my_filter) / sizeof(my_filter[0]);   // length in instructions, not bytes
setsockopt(s, SOL_SOCKET, SO_ATTACH_FILTER, &pf, sizeof(pf));

struct sockaddr_ll sll;
memset(&sll, 0, sizeof(sll));
sll.sll_family = PF_PACKET;
sll.sll_protocol = htons(protocol);
sll.sll_ifindex = if_nametoindex("eth0");
bind(s, (struct sockaddr *)&sll, sizeof(sll));
Error checking and getting the packet filter right is left as an exercise for the reader...
Depending on your application, an alternative that may be easier to get working is libpcap.
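If libpcap fits your application better, a rough sketch in the same hand-waving spirit (untested; the interface name and EtherType are placeholders) lets pcap do the protocol matching for you:

#include <pcap/pcap.h>
#include <cstdio>

int main() {
    char errbuf[PCAP_ERRBUF_SIZE];

    // Open the interface in promiscuous mode with a 1 s read timeout.
    pcap_t* handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (!handle) { fprintf(stderr, "pcap_open_live: %s\n", errbuf); return 1; }

    // Only hand us frames whose EtherType is 0x88b5.
    struct bpf_program prog;
    if (pcap_compile(handle, &prog, "ether proto 0x88b5", 1, PCAP_NETMASK_UNKNOWN) == 0) {
        pcap_setfilter(handle, &prog);
        pcap_freecode(&prog);
    }

    // Grab one matching packet.
    struct pcap_pkthdr* hdr;
    const u_char* data;
    if (pcap_next_ex(handle, &hdr, &data) == 1)
        printf("got %u bytes\n", hdr->caplen);

    pcap_close(handle);
    return 0;
}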
Is it the right method for the client to send data using the same connection accepted by the server?
The situation is like this: I have a Bluetooth server running on my PC, and on the other side I have an Android phone with a client and a server. From the Android side the client starts the connection. I am using the Bluetooth chat example from the Android samples.
The server-client code on the Android side looks like this:
BluetoothSocket socket;
InputStream tmpIn = null;
OutputStream tmpOut = null;
// Get the BluetoothSocket input and output streams
tmpIn = socket.getInputStream();
tmpOut = socket.getOutputStream();
And on the PC side I am using the BlueZ libraries to implement the server and client.
The code includes a Bluetooth receive thread and a main thread. Whenever the server accepts a connection from the Android phone, I just assign the socket value to a global variable, and whenever the client needs to send data it sends it using the same socket:
Server:-
int GLOBAL_CLIENT;
void* recive_bluetooth_trd(void*)
{
    ...............................
    ..............................
    client = accept(s, (struct sockaddr *)&rem_addr, &opt);
    GLOBAL_CLIENT = client;
    while (1) {
        bytes_read = read(client, buf, sizeof(buf));
        ....................
        ...................
    }
}
Client:-
void clinet(char *msg, int length) {
    ........................
    int bytes_write = write(GLOBAL_CLIENT, msg, length);
    ..........................
}
My question is: is this the right method? The problem is that sometimes the client sends data successfully from the PC, but it is not received on the Android side.
The biggest problem I see is that you won't ever leave your while(1) loop, even when the client disconnects. Read will return immediately forever with 0 bytes read (check for a return code of <= 0), trying to signal that the socket is disconnected. Your code will go into a tight infinite loop and use up all the CPU resources it can get its single-threaded hands on.
You need to make sure you ALWAYS check your socket and IO return codes and handle the errors correctly. The error handling for sockets is usually about 3x the actual socket code.
Unless of course the .......... stuff is the important bits. Always tough to tell when people hide code relevant to the question they are asking.
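As a minimal sketch of the receive loop with that return-code check added (variable names follow the question's snippet; cleanup beyond closing the socket is left out):

// Read until the peer disconnects (read() == 0) or an error occurs (read() < 0).
while (1) {
    ssize_t bytes_read = read(client, buf, sizeof(buf));
    if (bytes_read <= 0) {
        // 0 means an orderly disconnect, <0 means an error (check errno);
        // either way, stop looping instead of spinning forever.
        break;
    }
    // ... handle bytes_read bytes of data ...
}
close(client);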
Seems correct to me, but after the read you have to NUL ('\0') terminate your buffer if you are treating the data as strings:
buf[bytes_read] = '\0';
I am attempting to use Boost.Asio for serial communication. I am currently working in Windows, but will eventually be moving the code to Linux. Whenever I restart my computer, the data sent from the program is not what it should be (for example, I send a null followed by a carriage return and get "00111111 10000011" in binary), and it is not consistent (multiple nulls yield different binary).
However, as soon as I use any other program to send any data to the serial port and run the program again it works perfectly. I think I must be missing something in the initialization of the port, but my research has not turned anything up.
Here is how I am opening the port:
// An io_service to drive the serial port
boost::asio::io_service *io;
// The serial port itself
boost::shared_ptr<boost::asio::serial_port> port;

// Constructor functions
void Defaults() {
    io = new boost::asio::io_service();
    // Set default commands
    command.prefix = 170;
    command.address = 3;
    command.xDot[0] = 128;
    command.xDot[1] = 128;
    command.xDot[2] = 128;
    command.throtle = 0;
    command.button8 = 0;
    command.button16 = 0;
    command.checkSum = 131;
}
void Defaults(char *portName, int baud) {
    Defaults();
    // Set up the serial port
    port.reset(new boost::asio::serial_port(*io, portName));
    port->set_option(boost::asio::serial_port_base::baud_rate(baud));
    // This is for testing
    printf("portTest: %i\n", (int)port->is_open());
    port->write_some(boost::asio::buffer((void*)"\0", 1));
    boost::asio::write(*port, boost::asio::buffer((void*)"\0", 1));
    boost::asio::write(*port, boost::asio::buffer((void*)"\r", 1));
    boost::asio::write(*port, boost::asio::buffer((void*)"\r", 1));
    Sleep(2000);
}
Edit: In an attempt to remove unrelated code I accidentally deleted the line where I set the baud rate; I added it back. Also, I am checking the output with a null-modem and Docklight. Aside from the baud rate I am using all of the default serial settings specified for a Boost serial port (I have also tried explicitly setting them with no effect).
You haven't said how you're checking what's being sent, but it's probably a baud rate mismatch between the two ends.
It looks like you're missing this:
port->set_option( boost::asio::serial_port_base::baud_rate( baud ) );
Number of data bits, parity, and start and stop bits will also need configuring if they're different to the default values.
If you still can't sort it, stick an oscilloscope on the output and compare the waveform of the sender and receiver. You'll see something like this.
This is the top search result when looking for this problem, so I thought I'd give what I believe is the correct answer:
Boost Asio is bugged as of this writing and does not set default values for ports on the Windows platform. Instead, it grabs the current values on the port and then bakes them right back in. After a fresh reboot, the port hasn't been used yet, so its Windows-default values are likely not useful for communication (byte size typically defaults to 9, for example). After you use the port with a program or library that sets the values correctly, the values Asio picks up afterward are correct.
To solve this until Asio incorporates the fix, just set everything on the port explicitly, i.e.:
using boost::asio::serial_port;
using boost::asio::serial_port_base;
void setup_port(serial_port &serial, unsigned int baud)
{
    serial.set_option(serial_port_base::baud_rate(baud));
    serial.set_option(serial_port_base::character_size(8));
    serial.set_option(serial_port_base::flow_control(serial_port_base::flow_control::none));
    serial.set_option(serial_port_base::parity(serial_port_base::parity::none));
    serial.set_option(serial_port_base::stop_bits(serial_port_base::stop_bits::one));
}
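For completeness, a small usage sketch, assuming a port name of "COM3" on Windows (or something like "/dev/ttyS0" on Linux) and 9600 baud:

boost::asio::io_service io;
boost::asio::serial_port serial(io);
serial.open("COM3");       // "/dev/ttyS0" or similar on Linux
setup_port(serial, 9600);  // set every option explicitly, as above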
The way my game and server work is like this:
I send messages that are encoded in a format I created. It starts with 'p', followed by an integer for the message length, then the message.
ex: p3m15
The message is 3 bytes long. And it corresponds to message 15.
The message is then parsed and so forth.
It is designed for TCP potentially delivering only 1 byte at a time (since a single TCP read may return as little as 1 byte).
This message protocol I created is extremely lightweight and works great which is why I use it over something like JSON or other ones.
My main concern is, how should the client and the server start talking?
The server expects clients to send messages in my format. The game will always do this.
The problem I ran into was when I tested my server on port 1720. There was BitTorrent traffic and my server was picking it up. This was causing all kinds of random 'clients' to connect to my server and sending random garbage.
To 'solve' this, I made it so that the first thing a client must send me is the string "Hello Server".
If the first byte ever sent is != 'H' or if they have sent me > 12 bytes and it's != "Hello Server" then I immediately disconnect them.
This is working great. I'm just wondering if I'm doing something a bit naive or if there are more standard ways to deal with:
-Clients starting communication with the server
-Clients passing the "Hello Server" check, but somewhere along the line I get an invalid message. I can assume that my app will never send an invalid message; if it did, it would be a bug. Right now if I detect an invalid message I disconnect the client.
I noticed BitTorrent was sending '!!BitTorrent Protocol' before each message. Should I do something like that?
Any advice on this and making it safer and more secure would be very helpful.
Thanks
Perhaps a magic number field embedded in your message.
struct Message
{
    ...
    unsigned magic_number = 0xbadbeef3;
    ...
};
So the first thing you do after receiving something is check whether the magic_number field is 0xbadbeef3.
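A tiny sketch of that check on the receive side, assuming buf holds the bytes just received and client_socket is the accepted connection (both names are assumptions, not from the snippet above):

// Compare the first 4 bytes of the received data against the expected magic number.
unsigned received_magic;
memcpy(&received_magic, buf, sizeof(received_magic));
if (received_magic != 0xbadbeef3) {
    close(client_socket);   // not speaking our protocol (stray traffic, scanner, ...): drop it
}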
Typically, I design protocols with a header something like this:
typedef struct {
    uint32_t signature;
    uint32_t length;
    uint32_t message_num;
} header_t;

typedef struct {
    uint32_t foo;
} message13_t;
Sending a message:
message13_t msg;
msg.foo = 0xDEADBEEF;
header_t hdr;
hdr.signature = 0x4F4C494D; // "MILO"
hdr.length = sizeof(message13_t);
hdr.message_num = 13;
// Send the header
send(s, &hdr, sizeof(hdr), 0);
// Send the message data
send(s, &msg, sizeof(msg), 0);
Receiving a message:
header_t hdr;
char* buf;
// Read the header - all messages always have this
recv(s, &hdr, sizeof(hdr), 0);
// allocate a buffer for the rest of the message
buf = (char*)malloc(hdr.length);
// Read the rest of the message
recv(s, buf, hdr.length, 0);
This code is obviously devoid of error-checking or making sure all data has been sent/received.
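One thing worth spelling out: the recv calls above assume the whole header or body arrives in a single call, which TCP does not guarantee. A hedged sketch of a helper that loops over partial reads (error handling still minimal; recv_all is a made-up name):

// Read exactly len bytes from the socket, looping over partial reads.
// Returns true on success, false on error or peer disconnect.
bool recv_all(int s, void* data, size_t len)
{
    char* p = (char*)data;
    while (len > 0) {
        ssize_t n = recv(s, p, len, 0);
        if (n <= 0)              // 0 = connection closed, <0 = error
            return false;
        p += n;
        len -= (size_t)n;
    }
    return true;
}

// Usage: read the fixed-size header first, then the variable-size body.
// recv_all(s, &hdr, sizeof(hdr));
// recv_all(s, buf, hdr.length);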
I have some legacy code that uses OpenSSL for communication. Just like any other session, it does a handshake using the SSL functions and then carries on encrypted communication over TCP. We recently changed our code to use IO completion ports, which work contrary to the way OpenSSL expects to drive the socket. Basically, I'm having a hard time migrating our secure communication code from full OpenSSL usage to IOCP sockets with OpenSSL used only for encryption.
Does anyone have/anyone know of any references that might help me with such a task?
How would TLS handshaking work over IOCP?
In order to use OpenSSL for encryption but do your own socket IO, what you basically do is create memory BIOs that you read and write socket data through as it becomes available, and attach them to the SSL context.
Each time you do an SSL_write call, you follow up with a call to the memory BIO to see if it has data in its read buffer, read that out and send it.
Conversely, when data arrives on the socket via your IO completion port mechanism, you write it to the BIO and call SSL_read to read the data out. SSL_read might return an error code indicating it's in a handshake, which usually means it has generated more data to write - which you handle by reading the memory BIO again.
To create my SSL session, I do this:
// This creates a SSL session, and an in, and an out, memory bio and
// attaches them to the ssl session.
SSL* conn = SSL_new(ctx);
BIO* bioIn = BIO_new(BIO_s_mem());
BIO* bioOut = BIO_new(BIO_s_mem());
SSL_set_bio(conn,bioIn,bioOut);
// This tells the ssl session to start the negotiation.
SSL_set_connect_state(conn);
As I receive data from the network layer:
// buf contains len bytes read from the socket.
BIO_write(bioIn,buf,len);
SendPendingHandshakeData();
TryResendBufferedData(); // see below
int cbPlainText;
while ((cbPlainText = SSL_read(conn, plaintext, sizeof(plaintext))) > 0)
{
    // Send the decoded data to the application
    ProcessPlaintext(plaintext, cbPlainText);
}
As I receive data from the application to send - you need to be prepared for SSL_write to fail because a handshake is in progress, in which case you buffer the data, and try and send it again in the future after receiving some data.
if (SSL_write(conn, buf, len) <= 0)
{
    StoreDataForSendingLater(buf, len);
}
SendPendingHandshakeData();
And SendPendingHandshakeData sends any data (handshake or ciphertext) that SSL needs to send.
size_t cbPending;
while ((cbPending = BIO_ctrl_pending(bioOut)) > 0)
{
    int len = BIO_read(bioOut, buf, sizeof(buf));
    SendDataViaSocket(buf, len); // you fill this in here.
}
That's the process in a nutshell. The code samples aren't complete, as I had to extract them from a much larger library, but I believe they are sufficient to get one started with this use of SSL. In real code, when SSL_read/write / BIO_read/write fail, it's probably better to call SSL_get_error and decide what to do based on the result: SSL_ERROR_WANT_READ is the important one and means that you could not SSL_write any more data, as it needs you to read and send the pending data in the bioOut BIO first.
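A hedged sketch of that error handling, reusing the conn, buf, len and helper names from the snippets above:

int ret = SSL_write(conn, buf, len);
if (ret <= 0)
{
    switch (SSL_get_error(conn, ret))
    {
    case SSL_ERROR_WANT_READ:
        // The handshake needs more data from the peer first: flush anything
        // queued in bioOut, buffer our payload, and retry after the next read.
        SendPendingHandshakeData();
        StoreDataForSendingLater(buf, len);
        break;
    case SSL_ERROR_WANT_WRITE:
        // Rare with memory BIOs; just drain bioOut and retry later.
        SendPendingHandshakeData();
        break;
    default:
        // Fatal error or clean shutdown: tear the connection down.
        break;
    }
}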
You should look into Boost.Asio.