I have read the SSL TUNNELING INTERNET-DRAFT of December 1995 and set up an HTTP transparent proxy that works perfectly with unencrypted traffic.
Having read the above, and having googled my brains out, the accepted method for tunneling secure traffic through a proxy seems to be: connect to the requested host, have the proxy send an "HTTP 200..." confirmation message back to the client, and from that point on simply pass all further traffic between client and server.
When I try this, however, the client (a Chrome browser) responds to the "HTTP 200..." message with three "wingdings" characters, which I forward to the remote host. At that point there is no response back and the connection fails.
Here is the code I am using for this, after having connected to the host:
if ((*request=='C') && (*(request+1)=='O') && (*(request+2)=='N') && (*(request+3)=='N'))
{
    int recvLen;
    send(output, htok, strlen(htok), 0); // htok looks like "HTTP/1.0 200 Connection Established\nProxy-Agent: this_proxy\r\n\r\n"
    std::memset(buff, 0, bSize);
    int total;
    int bytes;
    int n;
    char cdata[MAXDATA];
    while ((recvLen = recv(output, buff, bSize-1, 0)) > 0) // recving from client - here we get wingdings
    {
        memset(cdata, 0, MAXDATA);
        strcat(cdata, buff);
        while (recvLen >= bSize-1) // just in case buff is too small
        {
            std::memset(buff, 0, bSize);
            recvLen = recv(output, buff, bSize-1, 0);
            strcat(cdata, buff);
        }
        total = 0;
        bytes = strlen(cdata);
        cout << cdata << endl; // how I see the wingdings
        while (total < strlen(cdata))
        {
            n = send(requestSock, cdata + total, bytes, 0); // forwarding to remote host
            if (n == SOCKET_ERROR)
            {
                cout << "secure sending error" << endl;
                break;
            }
            total += n;
            bytes -= n;
        }
        std::memset(buff, 0, bSize);
        recvLen = recv(requestSock, buff, bSize, 0); // get reply from remote host
        if (recvLen > 0)
        {
            do
            {
                cout << "Thread " << threadid << " [Connection:Secure]: " << recvLen << endl;
                send(output, buff, recvLen, 0); // forward all to client
                recvLen = recv(requestSock, buff, bSize, 0);
                if (0 == recvLen || SOCKET_ERROR == recvLen)
                {
                    cout << "finished secure receiving or socket error" << endl;
                    break;
                }
            } while (true);
        }
    } // end while; the loop checks again for client data
} // end if CONNECT
Can anyone spot the error of my ways?
Your code is much more complicated than necessary. Just read into a char array, save the length returned, and write that many bytes from the same array, in a loop until recv() returns zero. 4 lines of code including two for the braces. Don't try to assemble the entire incoming message, just relay whatever comes in as it comes. Otherwise you are just adding latency, and programming errors. Get rid of all the strXXX() calls altogether.
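For illustration, a minimal sketch of that relay loop, reusing the question's buff, bSize, and socket names (a robust version would also loop on partial send()s):

int n;
while ((n = recv(output, buff, bSize, 0)) > 0)
{
    send(requestSock, buff, n, 0); // forward exactly n bytes; binary-safe, no strcat/strlen
}

Note that a real tunnel has to relay in both directions at once, e.g. by select()ing on the two sockets or dedicating a thread to each direction, since TLS handshake traffic flows both ways.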
I don't think you should make the assumption that the traffic does not contain ASCII NUL characters:
strcat(cdata, buff);
}
total = 0;
bytes = strlen(cdata);
If there are ASCII NULs in the stream, these will fail.
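To see why, here is a minimal illustration; the bytes are a made-up stand-in for the start of a TLS record, which contains embedded zeros:

char tls[] = { 0x16, 0x03, 0x01, 0x00, 0x2f }; // TLS handshake record header
size_t n = strlen(tls); // 3, not 5: strlen() stops at the embedded 0x00,
                        // so strcat()/strlen() silently truncate the stream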
Related
I have implemented a secure WebSocket client (with OpenSSL encryption) in C++. The problem is that, for received messages larger than 16k bytes, the client does not receive the ping message from the server separately; instead the ping is appended to the tail end of the preceding data message. As a result, the client does not parse the ping, does not send a PONG reply, and the server closes the connection on timeout.
I am using a Python-based WebSocket server to test my client application. As far as I have tested, if the packets sent by the server are below 16k bytes, the pings are received correctly and consistently throughout the test.
This is my read_from_websocket function:
int read_from_websocket(char *recv_buff)
{
    uint buf_capacity = INT_MAX;
    uint buf_offset = 0;
    uint read_count;
    do
    {
        read_count = ssl_read(recv_buff + buf_offset, buf_capacity);
        if (read_count == P_FD_ERR)          // P_FD_ERR = -1
        {
            return -1;
        }
        else if (read_count == P_FD_PENDING) // P_FD_PENDING = -2
        {
            break;
        }
        if (read_count == 0)
        {
            break; // EOF
        }
        buf_offset += read_count;
        buf_capacity -= read_count;
    } while (buf_capacity);
    return INT_MAX - buf_capacity;
}
/** SSL read function called by the websocket read function above **/
int ssl_read(void *buf, size_t count)
{
    int len = SSL_read(ssl, buf, count);
    if (len < 0)
    {
        int err = SSL_get_error(ssl, len);
        if (err == SSL_ERROR_WANT_READ)
        {
            return P_FD_PENDING;
        }
        else
        {
            return P_FD_ERR;
        }
    }
    return len;
}
I don't understand:
how to ensure that a single call to read_from_websocket() always returns only one complete message (see the sketch after this list)
why this only happens for incoming ping messages, and only when the preceding message is longer than 16k bytes
whether the server side could be the culprit here, since I'm using off-the-shelf server code
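For what it's worth, the usual way to get one-message-at-a-time behavior is to parse the WebSocket frame header yourself and read exactly the advertised payload length, rather than relying on where SSL_read happens to stop (a TLS record carries at most 16 kB of plaintext, which is likely why 16k is the magic number here). A rough sketch, assuming blocking reads, unmasked server-to-client frames, <stdint.h>, and the ssl_read() helper above; read_exact() is a hypothetical wrapper:

// Hypothetical helper: loop over ssl_read() until exactly n bytes arrive.
// Treats errors, EOF, and P_FD_PENDING alike for brevity.
static int read_exact(void *buf, size_t n)
{
    size_t got = 0;
    while (got < n)
    {
        int r = ssl_read((char *)buf + got, n - got);
        if (r <= 0)
            return -1;
        got += (size_t)r;
    }
    return 0;
}

// Sketch: read exactly one RFC 6455 frame and return its payload length.
// The opcode in (hdr[0] & 0x0f) distinguishes pings (0x9) from data frames.
int read_one_frame(char *payload_out)
{
    unsigned char hdr[2];
    if (read_exact(hdr, 2) < 0) return -1;
    uint64_t len = hdr[1] & 0x7f;
    if (len == 126)                      // 16-bit extended payload length
    {
        unsigned char ext[2];
        if (read_exact(ext, 2) < 0) return -1;
        len = ((uint64_t)ext[0] << 8) | ext[1];
    }
    else if (len == 127)                 // 64-bit extended payload length
    {
        unsigned char ext[8];
        if (read_exact(ext, 8) < 0) return -1;
        len = 0;
        for (int i = 0; i < 8; ++i)
            len = (len << 8) | ext[i];
    }
    if (read_exact(payload_out, (size_t)len) < 0) return -1;
    return (int)len;                     // exactly one frame, however it was packetized
}

As for why only large messages trigger the problem: for a message under 16k the whole thing fits in one TLS record, so the offset loop hits P_FD_PENDING and returns before the ping's record is read; for larger messages several records are already buffered, and the loop happily appends the ping's bytes to the tail of the data. The off-the-shelf server is probably fine.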
I'm making this socket HTTP client (very basic). When recv()'ing response data from example.com it works fine and writes it all to a buffer, but when I try to recv any larger amount of data it stops at around 1500 bytes.
Right now all I'm trying to do is get the response written into the buffer (headers and all), not parse anything. But that isn't working: it runs for a few iterations but then stops or hangs. I'm asking for help identifying the issue with this receive_response() function that causes this behavior.
This is the function that recv's the HTTP response:
void tcp_client::receive_response(char *buffer) {
    int bytes_recv = 0;
    int total_bytes_recv = 0;
    for (;;) {
        bytes_recv = recv(sock, &buffer[total_bytes_recv], CHUNK_SIZE, 0);
        if (bytes_recv <= 0) {
            break;
        } else {
            total_bytes_recv += bytes_recv;
        }
    }
}
The main function:
int main(int argc, char **argv) {
    http_client http;
    char response[100000] = {0};
    http.connect_to_host("go.com", 80);
    http.send_request("GET / HTTP/1.1\r\n\r\n");
    http.receive_response(response);
    std::cout << response << std::endl;
    return 0;
}
Thank you
You seem to expect the server to close the connection after the response is transmitted. A typical HTTP 1.1 server doesn't do that by default; it keeps the connection open for further requests unless the client explicitly asks otherwise via a Connection: close header.
So you receive all the data, and then the next recv call sits there, waiting for more data to arrive.
An HTTP 1.1 client is instead expected to detect the end of the response via the Content-Length header, or by decoding a chunked response as indicated by the Transfer-Encoding: chunked header.
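For illustration, a rough sketch of the Content-Length route, reusing the question's sock and CHUNK_SIZE (real code must also handle Transfer-Encoding: chunked, case-insensitive header names, buffer bounds, and errors):

void tcp_client::receive_response(char *buffer) {
    int total = 0, n;
    char *body = nullptr;
    // 1) read until the blank line that ends the headers
    while (!(body = strstr(buffer, "\r\n\r\n"))) {
        n = recv(sock, buffer + total, CHUNK_SIZE, 0);
        if (n <= 0) return;
        total += n;
        buffer[total] = '\0';
    }
    body += 4; // first byte of the body
    // 2) parse Content-Length out of the headers
    long content_length = 0;
    if (const char *cl = strstr(buffer, "Content-Length:"))
        content_length = strtol(cl + 15, nullptr, 10);
    // 3) keep reading until the whole body has arrived
    while ((buffer + total) - body < content_length) {
        n = recv(sock, buffer + total, CHUNK_SIZE, 0);
        if (n <= 0) return;
        total += n;
    }
}

Alternatively, send Connection: close in the request and keep the original read-until-recv()-returns-0 loop, at the cost of a fresh connection per request.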
I have a server that uses a ZMQ_ROUTER to communicate with ZMQ_DEALER clients. I set the ZMQ_HEARTBEAT_IVL and ZMQ_HEARTBEAT_TTL options on the client socket to make the client and server ping-pong each other. Besides that, because of the ZMQ_HEARTBEAT_TTL option, the server will time out the connection if it does not receive any pings from the client within a given period, according to the zmq man page:
The ZMQ_HEARTBEAT_TTL option shall set the timeout on the remote peer for ZMTP heartbeats. If this option is greater than 0, the remote side shall time out the connection if it does not receive any more traffic within the TTL period. This option does not have any effect if ZMQ_HEARTBEAT_IVL is not set or is 0. Internally, this value is rounded down to the nearest decisecond, any value less than 100 will have no effect.
Therefore, what I expect the server to do is: when it does not receive any traffic from a client within that period, close the connection to that client and discard all the messages in the outgoing queue once the linger time expires. I created a toy example to check this hypothesis, and it turns out to be wrong. The chain of events is as follows:
The server sends a bunch of data to the client.
The client receives and processes the data, which is slow.
All send commands return successfully.
While the client is still receiving the data, I unplug the internet cable.
After a few seconds (set by the ZMQ_HEARTBEAT_TTL option), the server starts sending TCP FINs to the client, which are never ACKed.
The outgoing messages are not discarded (I checked memory consumption), even after a long while. They are discarded only when I call zmq_close on the router socket.
So my question is: is this how the ZMQ heartbeat mechanism is supposed to work? If not, is there a solution for what I want to achieve? I figure I could implement the heartbeat myself instead of using ZMQ's built-in one, but even then ZMQ does not seem to provide a way to close a connection between a ZMQ_ROUTER and a ZMQ_DEALER. Another variant of ZMQ_ROUTER, namely ZMQ_STREAM, does provide a way: you send an identity frame followed by an empty frame.
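(For reference, the ZMQ_STREAM disconnect alluded to above looks roughly like this; a sketch assuming the peer's routing id has already been captured in id/id_size:)

// On a ZMQ_STREAM socket, sending the peer's identity frame followed by
// a zero-length frame instructs ZMQ to close that TCP connection.
zmq_send(stream, id, id_size, ZMQ_SNDMORE);
zmq_send(stream, "", 0, 0);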
The toy example is below; any help would be appreciated.
Server's side:
#include <zmq.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    void *context = zmq_ctx_new();
    void *router = zmq_socket(context, ZMQ_ROUTER);
    int router_mandatory = 1;
    zmq_setsockopt(router, ZMQ_ROUTER_MANDATORY, &router_mandatory, sizeof(router_mandatory));
    int hwm = 0;
    zmq_setsockopt(router, ZMQ_SNDHWM, &hwm, sizeof(hwm));
    int linger = 3000;
    zmq_setsockopt(router, ZMQ_LINGER, &linger, sizeof(linger));
    char bind_addr[1024];
    sprintf(bind_addr, "tcp://%s:%s", argv[1], argv[2]);
    if (zmq_bind(router, bind_addr) == -1) {
        perror("ERROR");
        exit(1);
    }

    // Receive client identity (only 1)
    zmq_msg_t identity;
    zmq_msg_init(&identity);
    zmq_msg_recv(&identity, router, 0);
    zmq_msg_t dump;
    zmq_msg_init(&dump);
    zmq_msg_recv(&dump, router, 0);
    printf("%s\n", (char *) zmq_msg_data(&dump)); // hello
    zmq_msg_close(&dump);

    char buff[1 << 16];
    for (int i = 0; i < 50000; ++i) {
        if (zmq_send(router, zmq_msg_data(&identity), zmq_msg_size(&identity), ZMQ_SNDMORE) == -1) {
            perror("ERROR");
            exit(1);
        }
        if (zmq_send(router, buff, 1 << 16, 0) == -1) {
            perror("ERROR");
            exit(1);
        }
    }
    printf("OK IM DONE SENDING\n");
    // All send commands have returned successfully.
    // While the client is still receiving data, I unplug the internet cable on the client machine.
    // After a while, the server starts sending FINs.
    printf("SLEEP before closing\n"); // At this point, the messages are not discarded (memory usage is high).
    getchar();
    zmq_close(router);
    zmq_ctx_destroy(context);
}
Client's side:
#include <zmq.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    void *context = zmq_ctx_new();
    void *dealer = zmq_socket(context, ZMQ_DEALER);
    int heartbeat_ivl = 3000;
    int heartbeat_timeout = 6000;
    zmq_setsockopt(dealer, ZMQ_HEARTBEAT_IVL, &heartbeat_ivl, sizeof(heartbeat_ivl));
    zmq_setsockopt(dealer, ZMQ_HEARTBEAT_TIMEOUT, &heartbeat_timeout, sizeof(heartbeat_timeout));
    zmq_setsockopt(dealer, ZMQ_HEARTBEAT_TTL, &heartbeat_timeout, sizeof(heartbeat_timeout));
    int hwm = 0;
    zmq_setsockopt(dealer, ZMQ_RCVHWM, &hwm, sizeof(hwm));
    char connect_addr[1024];
    sprintf(connect_addr, "tcp://%s:%s", argv[1], argv[2]);
    zmq_connect(dealer, connect_addr);
    zmq_send(dealer, "hello", 6, 0);

    size_t size = 0;
    int i = 0;
    while (size < (1ll << 16) * 50000) {
        zmq_msg_t msg;
        zmq_msg_init(&msg);
        if (zmq_msg_recv(&msg, dealer, 0) == -1) {
            perror("ERROR");
            exit(1);
        }
        size += zmq_msg_size(&msg);
        printf("i = %d, size = %ld, total = %ld\n", i, zmq_msg_size(&msg), size); // This causes the client to be slow
        // Somewhere in this loop I unplug the internet cable.
        // The client starts sending FINs as well as trying to reconnect. The recv command hangs forever.
        zmq_msg_close(&msg);
        ++i;
    }
    zmq_close(dealer);
    zmq_ctx_destroy(context);
}
PS: I know that setting the high-water mark to unlimited is bad practice, but I figure the problem would be the same even with a low high-water mark, so let's ignore that for now.
I am trying to send a GET request to a Node.js server from a C++ client.
The Node.js server:
const server = http.createServer((request, response) => {
    console.log(request.url);
    response.end("received");
});
and here is my C++ client:
while (getline(cin, random_input)) {
    int s_len;
    input = "GET / HTTP/1.1\r\n\r\n";
    s_len = send(sock, input.c_str(), input.size(), 0);
    if (s_len < 0)
    {
        perror("Send failed : ");
        return false;
    }
    cout << socket_c.receive(1024);
}

string tcp_client::receive(int size = 512)
{
    char buffer[size];
    string reply;
    int r_len; // received len

    // Receive a reply from the server
    r_len = recv(sock, buffer, sizeof(buffer), 0);
    if (r_len < 0)
    {
        puts("recv failed");
    }
    if (buffer[r_len-1] == '\n') {
        buffer[r_len-1] = '\0';
    } else {
        buffer[r_len] = '\0';
    }
    reply = buffer;
    return reply;
}
so the C++ client can send a GET request each time something is typed in the terminal.
It works fine if I type something right after the connection has been established. However, if I wait 15-30 seconds after establishing the connection and then type something in the client program, the server receives nothing, even though the number of bytes sent (s_len) is correct.
May I know what goes wrong?
A few errors I spotted (see the sketch below):
The send return value is not checked correctly: the condition s_len == input.size() must be true before you can assume the whole request went out.
The recv return value is not checked for EOF. The code treats an r_len of 0 as valid data instead of a disconnect. This may be why you see no server replies: the server may have disconnected without you noticing.
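A sketch of both fixes; send_all is a hypothetical helper, and the recv side leaves room for the terminating NUL:

// Loop until everything is written; send() may accept fewer bytes than asked.
bool send_all(int sock, const char *data, size_t len)
{
    size_t sent = 0;
    while (sent < len)
    {
        int n = send(sock, data + sent, len - sent, 0);
        if (n < 0)
            return false;
        sent += (size_t)n;
    }
    return true;
}

// On the receive side, 0 means the peer closed the connection:
int r_len = recv(sock, buffer, sizeof(buffer) - 1, 0);
if (r_len == 0)
{
    // server closed the connection (e.g. a keep-alive timeout): reconnect or bail out
}
else if (r_len < 0)
{
    puts("recv failed");
}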
Setting keepAliveTimeout to 0 on the Node.js server (server.keepAliveTimeout = 0;) could also solve the problem, since by default Node.js closes idle keep-alive connections after a few seconds.
I have been trying to figure out this problem for over a month now, and I have nowhere else to turn.
I have a server that listens to many multicast channels (around 100). Each socket runs in its own thread. Then I have a client listener (single threaded) that handles all incoming connections, disconnects, and client messaging within the same server. The idea is that a client connects, requests data from a multicast channel, and I relay the UDP data back to the client for as long as it stays connected. The client can request either UDP or TCP as the protocol for the data relay. At one point this was working beautifully for a couple of weeks. Then we made some code and kernel changes, and now we can't figure out what's gone wrong.
The server will run for hours and have hundreds of clients connected throughout the day. But at some point, randomly, the server will just stop. And by "stop" I mean: all UDP sockets stop receiving/handling data (tcpdump shows data still arriving at the box), and the client_listener thread stops receiving client packets. BUT!!! the main client_listener socket can still receive new connections and disconnects. On a new connection, the main socket is able to send a "Connection Established" packet back to the client, but when the client responds, the select never returns.
I can post code if someone would like. If anyone has any suggestions where to look, or if this sounds like something you've seen, please let me know.
If you have any questions, please ask.
Thank you.
I would like to share my TCP server code.
It runs in a single thread. It works fine for hours, and then I will only receive "New Connections" and "Disconnects" - no client packets come in.
int opt = 1;
int addrlen;
int sd;
int max_sd;
int valread;
int activity;
int new_socket;
char buffer[MAX_BUFFER_SIZE];
int client_socket[m_max_clients];
struct sockaddr_in address;
fd_set readfds;

for (int i = 0; i < m_max_clients; i++)
{
    client_socket[i] = 0;
}

if ((m_master_socket = socket(AF_INET, SOCK_STREAM, 0)) == 0)
    LOG(FATAL) << "Unable to create master socket";
if (setsockopt(m_master_socket, SOL_SOCKET, SO_REUSEADDR, (char*)&opt, sizeof(opt)) < 0)
    LOG(FATAL) << "Unable to set master socket";

address.sin_family = AF_INET;
address.sin_addr.s_addr = INADDR_ANY;
address.sin_port = htons(m_listenPort);

if (bind(m_master_socket, (struct sockaddr*)&address, sizeof(address)) != 0)
    LOG(FATAL) << "Unable to bind master socket";
if (listen(m_master_socket, SOMAXCONN) != 0)
    LOG(FATAL) << "listen() failed with err";

addrlen = sizeof(address);
LOG(INFO) << "Waiting for connections......";

while (true)
{
    FD_ZERO(&readfds);
    FD_SET(m_master_socket, &readfds);
    max_sd = m_master_socket;
    for (int i = 0; i < m_max_clients; i++)
    {
        sd = client_socket[i];
        if (sd > 0)
            FD_SET(sd, &readfds);
        if (sd > max_sd)
            max_sd = sd;
    }
    activity = select(max_sd + 1, &readfds, NULL, NULL, NULL);
    if ((activity < 0) && (errno != EINTR))
    {
        // int err = errno;
        // LOG(ERROR) << "SELECT ERROR:" << activity << " " << err;
        continue;
    }
    if (FD_ISSET(m_master_socket, &readfds))
    {
        if ((new_socket = accept(m_master_socket, (struct sockaddr*)&address, (socklen_t*)&addrlen)) < 0)
            LOG(FATAL) << "ERROR:ACCEPT FAILED!";
        LOG(INFO) << "New Connection, socket fd is (" << new_socket << ") client_addr:" << inet_ntoa(address.sin_addr) << " Port:" << ntohs(address.sin_port);
        for (int i = 0; i < m_max_clients; i++)
        {
            if (client_socket[i] == 0)
            {
                // try to set the socket to non blocking, tcp nagle and keep alive
                if (!SetSocketBlockingEnabled(new_socket, false))
                    LOG(INFO) << "UNABLE TO SET NON-BLOCK: (" << new_socket << ")";
                if (!SetSocketNoDelay(new_socket, false))
                    LOG(INFO) << "UNABLE TO SET DELAY: (" << new_socket << ")";
                // if (!SetSocketKeepAlive(new_socket, true))
                //     LOG(INFO) << "UNABLE TO SET KeepAlive: (" << new_socket << ")";
                ClientConnection* con = new ClientConnection(m_mocSrv, m_udpPortGenerator, inet_ntoa(address.sin_addr), ntohs(address.sin_port), new_socket);
                if (con->login())
                {
                    client_socket[i] = new_socket;
                    m_clientConnectionSocketMap[new_socket] = con;
                    LOG(INFO) << "Client Connection Logon Complete";
                }
                else
                    delete con;
                break;
            }
        } // for
    }
    else
    {
        try {
            for (int i = 0; i < m_max_clients; i++)
            {
                sd = client_socket[i];
                if (FD_ISSET(sd, &readfds))
                {
                    if ((valread = recv(sd, buffer, sizeof(buffer), MSG_DONTWAIT|MSG_NOSIGNAL)) <= 0)
                    {
                        // remove from the fd listening set
                        LOG(INFO) << "RESET CLIENT_SOCKET:(" << sd << ")";
                        client_socket[i] = 0;
                        handleDisconnect(sd, true);
                    }
                    else
                    {
                        std::map<int, ClientConnection*>::iterator client_connection_socket_iter = m_clientConnectionSocketMap.find(sd);
                        if (client_connection_socket_iter != m_clientConnectionSocketMap.end())
                        {
                            client_connection_socket_iter->second->handle_message(buffer, valread);
                            if (client_connection_socket_iter->second->m_logoff)
                            {
                                LOG(INFO) << "SOCKET LOGGED OFF:" << sd;
                                client_socket[i] = 0;
                                handleDisconnect(sd, true);
                            }
                        }
                        else
                        {
                            LOG(ERROR) << "UNABLE TO FIND SOCKET DESCRIPTOR:" << sd;
                        }
                    }
                }
            }
        } catch (...)
        {
            LOG(ERROR) << "EXCEPTION CATCH!!!";
        }
    }
}
From the information given I would state the following:
Do not use a thread for each connection. Since you're on Linux, use epoll edge-triggered multiplexing; most newer web frameworks use this technology. For more background, look up the C10K problem.
By eliminating threads from the equation you remove the possibility of deadlock and reduce the complexity of debugging and of reasoning about thread-safe variables.
Ensure each connection, when finished, is completely closed.
Ensure that no new firewall rules have popped up in iptables since the upgrade.
Check any firewalls on the network to see if they are restricting certain types of activity (is your server on a new IP since the upgrade?).
In short, I would put my money on a thread deadlock and/or starvation. I've personally run experiments comparing a multithreaded server against a single-threaded epoll server. The results were night and day: epoll blows away the multithreaded implementation (for I/O) and makes the code simpler to write, debug, and maintain.
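To give a flavor of the pattern, here is a bare-bones edge-triggered epoll loop (Linux-only; error handling and the usual socket headers trimmed; SetSocketBlockingEnabled is the helper from the question):

#include <sys/epoll.h>

// Single-threaded, edge-triggered epoll server loop.
// listen_fd and every accepted socket must be non-blocking.
int ep = epoll_create1(0);
struct epoll_event ev;
ev.events = EPOLLIN | EPOLLET;
ev.data.fd = listen_fd;
epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

struct epoll_event events[64];
while (true)
{
    int n = epoll_wait(ep, events, 64, -1);
    for (int i = 0; i < n; ++i)
    {
        int fd = events[i].data.fd;
        if (fd == listen_fd)
        {
            int client;
            while ((client = accept(listen_fd, NULL, NULL)) >= 0) // ET: drain every pending accept
            {
                SetSocketBlockingEnabled(client, false);
                struct epoll_event cev;
                cev.events = EPOLLIN | EPOLLET;
                cev.data.fd = client;
                epoll_ctl(ep, EPOLL_CTL_ADD, client, &cev);
            }
        }
        else
        {
            char buf[4096];
            int r;
            while ((r = recv(fd, buf, sizeof(buf), 0)) > 0) // ET: read until EAGAIN
            {
                // handle_message(fd, buf, r);
            }
            if (r == 0) // peer closed the connection
            {
                epoll_ctl(ep, EPOLL_CTL_DEL, fd, NULL);
                close(fd);
            }
        }
    }
}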