EPP server SSL_read hang after greeting - C++

I am having strange problems with the SSL_read/SSL_write functions when talking to an EPP server.
After connecting, I read the greeting message successfully:
bytes = SSL_read(ssl, buf, sizeof(buf)); // get reply & decrypt
buf[bytes] = 0;
ball+= bytes;
cc = getInt(buf);
printf("header: %x\n",cc);
printf("Received: \"%s\"\n",buf+4);
The first 4 bytes are 00, 00, 09, EB, and I read 2539 bytes of greeting message.
After that, every operation such as hello or login hangs in SSL_read():
const char *xml = "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?><epp xmlns=\"urn:ietf:params:xml:ns:epp-1.0\"><hello/></epp>";
char bb[1000] = {0};
makeChar(strlen(xml)+4, bb);
memcpy(bb+4, xml, strlen(xml)+4);
bytes = SSL_write(ssl,xml,strlen(xml)+4);
usleep(500000); //sleep 0.5 sec
memset(buf, 0, 1024);
printf("read starting.\n");
bytes = SSL_read(ssl, buf, 1024); //always hang here
buf[bytes]=0;
printf("%d : %s", bytes, buf);
I am confused. I have read the RFC documentation but I cannot find the answer.
The EPP documentation says: "In order to verify the identity of the secure server you will need the ‘Verisign Class 3 Public Primary Certification Authority’ root certificate available free from www.verisign.com".
Is it important?

Is it important?
Yes. As outlined in RFC 5734, "Extensible Provisioning Protocol (EPP) Transport over TCP", the whole security of an EPP exchange rests on three properties:
an access list based on IP address
TLS communication and verification of certificates (mutually, which is why you - as registrar, a.k.a. the client in EPP communication - often have to send the certificate you will use to the registry in advance)
the EPP credentials used at the <login> command.
Failure to properly secure the connection can mean:
you, as registrar, sending confidential information (your own EPP login, various details on domains you sponsor or not, including <authInfo> values, etc.) to a third party that is not the registry
and, in reverse, someone impersonating you in the eyes of the registry and performing operations whose burden you will have to bear, financially (for all domains bought) and legally.
But even in general, for any TLS handshake: if you want to be sure you are connected, as a client, to the server you think you are, you need to verify its certificate.
Besides trivial things (validity dates, etc.), the certificate:
should at least be signed by a CA you trust (your choice whom you trust; see the sketch after this list)
and/or be a specific certificate with a specific fingerprint/serial and other characteristics (but you will have to maintain that when the other party changes its certificate)
and/or match DNS TLSA records
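For the first point, a minimal OpenSSL sketch (assuming OpenSSL 1.1.0 or later and a hypothetical PEM file of trusted roots named verisign_root.pem; configure this before connecting):
#include <openssl/ssl.h>

// Make the handshake fail unless the server presents a certificate that
// chains to a root we explicitly trust.
void enable_verification(SSL_CTX *ctx) {
    SSL_CTX_load_verify_locations(ctx, "verisign_root.pem", NULL); // trust anchor(s)
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);                // verify or abort
}
// On each connection, also check that the certificate matches the expected host:
// SSL_set1_host(ssl, "epp.example-registry.example");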
In short: if you are new to EPP, TLS and C/C++ (as you state yourself in your other question about the Verisign certificate), I strongly recommend you do not try to do all of this yourself at such a low level. For example, you should never manipulate XML as you do above; it shouldn't be a string, and there are libraries to properly parse and generate XML documents. You should use an EPP library that handles most of this for you. Your registry may provide an "SDK" that you can use; you should ask them.
PS: your read is probably hanging because you are not sending the payload in the correct fashion (again, something an EPP library will do for you). You need to send the length as 4 bytes (computed after converting your string to bytes using the UTF-8 encoding), and then the payload itself. I am not sure this is what your code does: note that SSL_write(ssl, xml, ...) sends the raw XML, not the bb buffer you prepared with the length header. Your reading part is also wrong: you should first read 4 bytes from the server, which gives you the length. They will not necessarily arrive in a single packet, so one SSL_read might not return all 4 of them; you need a loop. Once you know the length of the payload, you can set up proper buffers if needed and detect reliably when you have received everything.
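A minimal sketch of that framing, assuming an established OpenSSL connection in ssl (the helper names are illustrative, and error handling is reduced to the essentials):
#include <openssl/ssl.h>
#include <arpa/inet.h>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Read exactly len bytes, looping because a single SSL_read may return fewer.
static bool read_exact(SSL *ssl, unsigned char *buf, size_t len) {
    size_t got = 0;
    while (got < len) {
        int n = SSL_read(ssl, buf + got, (int)(len - got));
        if (n <= 0) return false; // connection closed or error
        got += (size_t)n;
    }
    return true;
}

// Send one EPP frame: 4-byte big-endian total length, then the XML payload.
static bool send_frame(SSL *ssl, const std::string &xml) {
    uint32_t total = htonl((uint32_t)(xml.size() + 4)); // length includes the 4 header bytes
    std::vector<unsigned char> frame(4 + xml.size());
    std::memcpy(frame.data(), &total, 4);
    std::memcpy(frame.data() + 4, xml.data(), xml.size());
    return SSL_write(ssl, frame.data(), (int)frame.size()) == (int)frame.size();
}

// Receive one EPP frame: read the 4-byte header first, then exactly the payload.
static bool recv_frame(SSL *ssl, std::string &xml) {
    unsigned char hdr[4];
    if (!read_exact(ssl, hdr, 4)) return false;
    uint32_t total;
    std::memcpy(&total, hdr, 4);
    total = ntohl(total);
    if (total < 4) return false; // malformed header
    xml.resize(total - 4);
    return total == 4 || read_exact(ssl, (unsigned char *)&xml[0], total - 4);
}
A hello would then be send_frame(ssl, hello_xml) followed by recv_frame(ssl, reply).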

Related

get the binary data transferred from grpc client

I am new to the gRPC framework, and I have created a sample client-server on my PC (referring to this).
In my client-server application I have implemented a simple RPC:
service NameStudent {
  rpc GetRoll(RollNo) returns (Details) {}
}
The client sends a RollNo and receives his/her details which are name, age, gender, parent name, and roll no.
message RollNo {
  int32 roll = 1;
}

message Details {
  string name = 1;
  string gender = 2;
  int32 age = 3;
  string parent = 4;
  RollNo rollid = 5;
}
The actual server and client code is an adaptation of the sample code explained here.
Now my server is able to listen on "0.0.0.0:50051" (address:port) and the client is able to send the roll no to "localhost:50051" and receive the details.
I want to see the actual binary data that is transferred between client and server. I have tried using Wireshark, but I don't understand what I am seeing there.
Here is a screenshot of the Wireshark capture.
And here are the details of the highlighted entry from the above screenshot.
I need help understanding Wireshark here, or any other way that can be used to see the binary data.
Wireshark uses the port to determine how to decode the communication, and it doesn't know any protocol associated with 50051. So you need to configure it to treat this as HTTP.
Right click on a row and select "Decode As..." in the context menu.
Then set "Current" to "HTTP" or "HTTP2" (HTTP will generally auto-detect HTTP2) and hit "OK".
Then the HTTP/2 frames should be decoded. And if using a recent version of Wireshark, you may also see the gRPC frames decoded.
The whole idea of gRPC is to HIDE that. But let's say we ignore that and you know what you're doing.
Look at https://en.wikipedia.org/wiki/Protocol_Buffers. gRPC uses Protocol Buffers for its data representation, which might give you a hint at the data you're seeing.
Two good starting points for a reverse-engineering exercise are:
Start simple: compile a program that sends an integer. Understand it. Sniff it (a sketch of this step follows the list). Then compile a program that sends a string. Try several values. Once you understand those, move on to tackling how Google sends your structure.
Use known data and make small variations: knowing what 505249... means is easier if you know the data you're sending. For example, send the string "Hello world", then change it to "Hella world" and see what changes in the captured output; also check that sending the same data several times produces the same sniffed output. Apply the prior point and start simple: first the empty string, then " ", then "a", then "b", and so on, before moving on to complex and larger strings. Don't be afraid to start simple.
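For instance, a hypothetical sketch of the "send an integer" step, assuming the protoc-generated header from the question's .proto is called namestudent.pb.h (no package is declared there, so the classes live in the global namespace):
#include <cstdio>
#include <string>
#include "namestudent.pb.h" // assumed name of the protoc-generated header

int main() {
    RollNo r;
    r.set_roll(42);
    std::string bytes;
    r.SerializeToString(&bytes);
    // Hex-dump the wire bytes to compare against the Wireshark capture.
    for (unsigned char c : bytes)
        std::printf("%02x ", c); // prints "08 2a": field 1 as varint, value 42
    std::printf("\n");
}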

How to copy int to u_char*

I have a u_char* dynamic array holding the binary data of some network packet. I want to change the destination port number in the packet to some integer value. Suppose the port number offset within the packet is ofs, with a length of 4 bytes.
I tried the following two methods:
u_char* packet = new u_char[packet_size]; // Packet still empty
// Read packet from network ...
int new_port = 1234;
Method #1:
std::copy((u_char*)&new_port, (u_char*)&new_port+4, packet+ofs);
Method #2:
std::string new_port_str = std::to_string(new_port);
auto new_port_bytes = new_port_str.c_str();
std::copy(new_port_bytes, new_port_bytes+4, packet+ofs);
Both methods give a garbage value for the port number (but the rest of the packet is OK). Could anyone help me?
You have to convert the integer from whatever internal representation your platform happens to use into the format the particular network protocol requires when it is sent over the network.
This depends on the particular network protocol you're trying to use -- check its documentation for precisely the format it requires ports to be expressed in. My bet is that it's network byte order. You probably have functions like htons to convert shorts to network byte order.
Another problem: how many bytes is an int on your platform, and how many bytes does the network protocol use to express ports? I'll bet the answers are 4 and 2 respectively, so that's another issue. (Or maybe it isn't. I don't know for sure how many bytes an int is on your platform, nor do I know which protocol you're trying to work with, so I have to guess.)
You can't just write code randomly and expect it to work. You have to think about what you're trying to do and understand the requirements.
My recommendation would be to look at the specification for the network protocol you're working with and figure out exactly which bytes in the data have to change and what they have to change to. Then write code to change each byte to the correct value according to the network protocol specification. This will ensure your code works correctly on any platform.
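As an illustration, a minimal sketch assuming a TCP/UDP-style header where the destination port is a 16-bit big-endian field at byte offset ofs:
#include <arpa/inet.h>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Write a 16-bit port, converted to network byte order, into the packet bytes.
void set_dest_port(unsigned char *packet, std::size_t ofs, uint16_t new_port) {
    uint16_t be_port = htons(new_port);     // host -> network (big-endian)
    std::memcpy(packet + ofs, &be_port, 2); // the field is exactly 2 bytes wide
}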

Serialize and deserialize the message using google protobuf in socket programming in C++

The message format to send to the server side is as below:
package test;

message Test {
  required int32 id = 1;
  required string name = 2;
}
Server.cpp does the encoding:
string buffer;
test::Test original;
original.set_id(0);
original.set_name("original");
original.AppendToString(&buffer);
send(acceptfd, buffer.c_str(), buffer.size(), 0);
This send call does deliver the data to the client, I hope, and I am not getting any error from this particular code either.
But my concern is the following:
How do I decode the above message with Google Protocol Buffers on the client side, so that I can see/print the message?
You should send more than just the protobuf message to be able to decode it on the client side.
A simple solution would be to send the value of buffer.size() over the socket as a 4-byte integer in network byte order, and then send the buffer itself.
The client should first read the buffer's size from the socket and convert it from network to host byte order. Let's denote the resulting value s. The client must then preallocate a buffer of size s and read s bytes from the socket into it. After that, just use MessageLite::ParseFromString to reconstruct your protobuf.
See here for more info on protobuf message methods.
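A sketch of the client side described above, assuming a connected socket sockfd and that protoc generated test.pb.h from the question's .proto (the header name and the recv_exact helper are illustrative):
#include <arpa/inet.h>
#include <sys/socket.h>
#include <cstdint>
#include <string>
#include "test.pb.h" // assumed name of the protoc-generated header

// recv() until exactly len bytes have arrived; a single recv may return fewer.
static bool recv_exact(int sockfd, char *buf, size_t len) {
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(sockfd, buf + got, len - got, 0);
        if (n <= 0) return false;
        got += (size_t)n;
    }
    return true;
}

// Read the 4-byte length prefix, then the payload, then parse the protobuf.
bool read_message(int sockfd, test::Test &msg) {
    uint32_t netlen;
    if (!recv_exact(sockfd, (char *)&netlen, 4)) return false;
    uint32_t s = ntohl(netlen); // network -> host byte order
    std::string buf(s, '\0');
    if (!recv_exact(sockfd, &buf[0], s)) return false;
    return msg.ParseFromString(buf); // msg.id() and msg.name() are then usable
}
This pairs with a server that first sends htonl(buffer.size()) as 4 bytes and then the buffer itself, as described above.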
Also, this document discourages the usage of required:
You should be very careful about marking fields as required. If at
some point you wish to stop writing or sending a required field, it
will be problematic to change the field to an optional field – old
readers will consider messages without this field to be incomplete and
may reject or drop them unintentionally. You should consider writing
application-specific custom validation routines for your buffers
instead. Some engineers at Google have come to the conclusion that
using required does more harm than good; they prefer to use only
optional and repeated. However, this view is not universal.

How to determine length of buffer at client side

I have a server sending a multi-dimensional character array:
char buff1[][3] = { {0xff,0xfd,0x18}, {0xff,0xfd,0x1e}, {0xff,0xfd,0x21} };
In this case buff1 carries 3 messages (each having 3 characters). There could be multiple instances of buffers on the server side with messages of variable length (note: each message will always have 3 characters), e.g.
char buff2[][3] = { {0xff,0xfd,0x20}, {0xff,0xfd,0x27} };
How should I store the size of these buffers on the client side when compiling the code?
The server should send information about the length (and any other structure) of the message as part of the message itself.
An easy way to do that is to send the number of bytes in the message first, then the bytes in the message. Often you also want to send the version of the protocol (so you can detect mismatches) and maybe even a message id header (so you can send more than one kind of message).
If blazing-fast performance isn't the goal (and you are talking over a network interface, which tends to be slower than the computers at either end, so parsing may be cheap enough that you don't care), using a higher-level protocol or format is sometimes a good idea (JSON, XML, whatever). This also helps with debugging problems, because instead of debugging your custom protocol, you get to debug the higher-level format.
Alternatively, you can send some sign that the sequence has terminated. If there is a value that is never a valid sequence element (such as 0,0,0), you can send that to say "no more data". Or you can send each element with a header saying whether it is the last element, or the header can say that this element doesn't exist and the last element was the previous one.
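A minimal sketch of the count-first approach over a plain TCP socket (the names are illustrative, and a production version would loop on partial send()):
#include <arpa/inet.h>
#include <sys/socket.h>
#include <cstdint>

// Send a 4-byte big-endian message count, then the 3-byte messages themselves.
bool send_messages(int sockfd, const unsigned char (*msgs)[3], uint32_t count) {
    uint32_t nethdr = htonl(count); // count in network byte order
    if (send(sockfd, &nethdr, 4, 0) != 4) return false;
    size_t total = (size_t)count * 3; // each message is exactly 3 bytes
    return send(sockfd, msgs, total, 0) == (ssize_t)total;
}
The client first reads those 4 bytes, applies ntohl, and then knows exactly how many 3-byte messages to read, so no buffer size needs to be compiled in.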

Embedding Mongoose Web Server in C++

I just embedded the Mongoose web server into my C++ DLL (it's just a single header and recommended in most of the Stack Overflow threads), and I have it up and running properly with the very minimal example code.
However, I am having a rough time finding any sort of tutorials, examples, etc. on configuring the very basic necessities of a web server. I need to figure out the following...
1) How to allow directory browsing
2) Is it possible to restrict download speeds on the files?
3) Is it possible to have a dynamic list of IP addresses allowed to download files?
4) How to allow the download of specific file extensions (.bz2 in this case) ANSWERED
5) How to bind to a specific IP address ANSWERED
Most of the information I have found is in regard to using the pre-compiled binary release, so I am a bit stumped right now. Any help would be fantastic!
1) "enable_directory_listing" option
2) Not built into Mongoose (at least not the version I have, which is about 6 months old). [EDIT:] Newer versions of Mongoose support throttling download speed. From the manual...
Limit download speed for clients. throttle is a comma-separated list of key=value pairs, where key could be:
*                   limit speed for all connections
x.x.x.x/mask        limit speed for specified subnet
uri_prefix_pattern  limit speed for given URIs
The value is a floating-point number of bytes per second, optionally followed by a k or m character, meaning kilobytes and megabytes respectively. A limit of 0 means unlimited rate. The last matching rule wins. Examples:
*=1k,10.0.0.0/8=0   limit all accesses to 1 kilobyte per second, but give connections from the 10.0.0.0/8 subnet unlimited speed
/downloads/=5k      limit accesses to all URIs in /downloads/ to 5 kilobytes per second. All other accesses are unlimited
3) "access_control_list" option. In the code accept_new_connection calls check_acl that compares the client's IP to a list of IPs to accept and/or ignore. From the manual...
Specify access control list (ACL). ACL is a comma separated list of IP subnets, each subnet is prepended by '-' or '+' sign. Plus means allow, minus means deny. If subnet mask is omitted, like "-1.2.3.4", then it means single IP address. Mask may vary from 0 to 32 inclusive. On each request, full list is traversed, and last match wins. Default setting is to allow all. For example, to allow only 192.168/16 subnet to connect, run "mongoose -0.0.0.0/0,+192.168/16". Default: ""
http://code.google.com/p/mongoose/wiki/MongooseManual
Of course, as soon as I gave up and posted, I found most of the answers were right in front of my face. Here are the options for them...
const char *options[] =
{
    "document_root", "C:/",
    "listening_ports", "127.0.0.1:8080",
    "extra_mime_types", ".bz2=plain/text",
    NULL
};
However, I am still not sure how to enable directory browsing. Right now, my callback function is just the basic one from the example (seen below). What would I need to do to get the files listed?
static void *callback(enum mg_event event, struct mg_connection *conn, const struct mg_request_info *request_info)
{
    if (event == MG_NEW_REQUEST)
    {
        // Echo the requested URI back to the client
        mg_printf(conn, "HTTP/1.1 200 OK\r\n"
                        "Content-Type: text/plain\r\n\r\n"
                        "%s", request_info->uri);
        return ""; // Mark as processed
    }
    else
    {
        return NULL;
    }
}
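A guess, based on how this callback-per-event Mongoose API dispatches requests: returning a non-NULL value from MG_NEW_REQUEST marks the request as handled, so the built-in file/directory handler never runs. Letting the request fall through, together with the "enable_directory_listing" option, should produce listings:
const char *options[] =
{
    "document_root", "C:/",
    "listening_ports", "127.0.0.1:8080",
    "extra_mime_types", ".bz2=plain/text",
    "enable_directory_listing", "yes", // let the built-in handler list directories
    NULL
};

static void *callback(enum mg_event event, struct mg_connection *conn, const struct mg_request_info *request_info)
{
    (void)event; (void)conn; (void)request_info;
    return NULL; // not processed here: Mongoose serves files and listings itself
}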