HTTP over TCP/IP? - C++

I am trying to use C++ and SDL_net to make an HTTP client. I'm using a char[] buffer to send and receive data.
Basically, I connect to the site mentioned below on port 80:
strcpy(buffer,"GET / HTTP/1.0 \n host: nullfakenothing.freeriderwebhosting.com \n \n");
SDLNet_TCP_Send(sd, (void *)buffer, strlen(buffer)+1);
SDLNet_TCP_Recv(sd, (void *)buffer, 200)>0
But I can't get anything back (the program gets stuck in Recv). Am I using the protocol wrong, or is there something else wrong with how I'm doing HTTP over TCP?

Your HTTP request has spurious spaces and should use \r\n line terminators. This is untested, but the HTTP itself should be okay. You may want to add other headers.
char buffer[1024];
std::strcpy(buffer, "GET / HTTP/1.1\r\n");
std::strcat(buffer, "Host: nullfakenothing.freeriderwebhosting.com\r\n");
std::strcat(buffer, "\r\n");
SDLNet_TCP_Send(sd, (void*) buffer, strlen(buffer));
SDLNet_TCP_Recv(sd, (void*) buffer, sizeof(buffer));

What you send is not HTTP, but something that looks a little like HTTP if you don't look too hard. Please make yourself comfortable with the specification (such as RFC 2616), or at least look at some packet dumps or working code to see exactly what you need to send. Since so many things are wrong, it makes no sense to point out specific errors.

Related

C++ HTTP client hangs on read() call after GET request

std::string HTTPrequest = "GET /index.html HTTP/1.1\r\nHost: www.yahoo.com\r\nConnection: close\r\n\r\n";
write(socket, HTTPrequest.c_str(), sizeof(HTTPrequest));
char pageReceived[4096];
int bytesReceived = read(socket, pageReceived, 4096);
I've got an HTTP client program that I run from my terminal. I've also got a webserver program. Using the webserver as a test, I can verify that the socket creation and attachment works correctly.
I create the request as shown above, then write to the socket. Using print statements, I can see that the code moves beyond the write call. However, it hangs on the read call.
I can't figure out what's going on - my formatting looks correct on the request.
Any ideas? Everything seems to work perfectly fine when I connect to my webserver, but both www.yahoo.com and www.google.com cause a hang. I'm on Linux.
In C and C++, sizeof gives you the number of bytes required to hold a type, regardless of its contents. So you are not sending the full request, only sizeof(std::string) bytes of it. You want HTTPrequest.size() (which gives you the number of bytes in the string's value), not sizeof(HTTPrequest) (which gives you the number of bytes the std::string object itself occupies).

Winsock: Echo server replying twice, when I just programmed it to do one send() call

I am writing a simple TCP server in C++ for Windows to echo incoming data, and I have a problem with it. Before I explain, I should say that Winsock is properly set up and this problem happens with any source IP address.
The general behaviour when a connection is established is this:
In the loop that runs while the connection is still alive, it must echo the data, preceded by the word REPLY.
To do that, I'm currently using two send() calls:
One call sending "REPLY " alone.
Another call just sending back received data.
But using the PuTTY client, I'm getting this:
REPLY data_echoed REPLY.
Why is REPLY sent after the last send call if it was the first? I'll post the bit of code where the problem happens:
//Reply to client
message = "HELLO CLIENT!! READY TO ECHO.\n";
send(new_socket, message, strlen(message), 0);
///Enter into a loop until connection is finished.
printf("%s \n\n", "Incoming data goes here: ");
do{
///Clear buffer and receive data.
memset(buffer, 0, sizeof(buffer));
ret = recv(new_socket, buffer, sizeof(buffer), 0);
printf("%s", buffer);
///Send a REPLY WORD and the data echoed.
send(new_socket, "REPLY\r\n", 7, 0);
send(new_socket, buffer, sizeof(buffer), 0);
}while(ret != SOCKET_ERROR);
What is wrong with that? If I remove the first call, the double effect disappears. Why can't I do two send calls one after the other?
You ignore the return value of recv until after you send REPLY, so no matter what happens, you send REPLY followed by the contents of the buffer. After you echo the first time, something happens, and no matter what it is, you send REPLY.
Bluntly, it doesn't look like you understand the very basics of TCP. We used to have a rule that before anyone can do anything with TCP, they must memorize and completely understand the following statement: "TCP is a byte-stream protocol that does not preserve application message boundaries."
Your code pretends that it is receiving and echoing application-level messages. But there is no code to actually implement application-level messages. TCP has no support for application-level messages, so if you want them, you have to implement them. You clearly want them. You also have not implemented them.
Do newline characters delimit messages? If so, where's the code to find them in the data stream? If not, what does?

Sending HTML tag to browser via socket connection with C++ Socket API

I am trying to make a simple http server with C++. I've followed the beej's guide of network programming in C++.
When I run the server on some port (8080, 2127, etc.), it successfully sends a response to the browser (Firefox) when accessed via the address bar as localhost:PORT_NUMBER, except on port 80.
This is the code I wrote:
printf("Server: Got connection from %s\n", this->client_ip);
if(!fork()) // This is the child process, fork() -> Copy and run process
{
close(this->server_socket); // Child doesn't need listener socket
// Try to send message to client
char message[] = "\r\nHTTP/1.1 \r\nContent-Type: text/html; charset=ISO-8859-4 \r\n<h1>Hello, client! Welcome to the Virtual Machine Web..</h1>";
int length = strlen(message); // Plus 1 for null terminator
int send_res = send(this->connection, message, length, 0); // Flag = 0
if(send_res == -1)
{
perror("send");
}
close(this->connection);
exit(0);
}
close(this->connection); // Parent doesn't need this;
The problem is, even though I have added the header at the very start of the response string, the browser does not render the HTML properly; it shows only plain text, something like this:
Content-Type: text/html; charset=ISO-8859-4
<h1>Hello, client! Welcome to the Virtual Machine Web..</h1>
Not a big "Hello, client!..." string as a normal h1-tagged heading would appear. What is the problem? Am I missing something in the header?
Another question: why won't the server run on port 80? The error log on the server says:
server: bind: Permission denied
server: bind: Permission denied
Server failed to bind
libc++abi.dylib: terminate called throwing an exception
Please help. Thank you. Edit: I don't have any other process on port 80.
You need to terminate the HTTP response header with \r\n\r\n, rather than just \r\n. It should also start with something more like HTTP/1.1 200 OK\r\n, without the leading \r\n.
For your port problem: on Unix-like systems, binding to ports below 1024 requires elevated privileges, which is what the bind: Permission denied error indicates; run the server as root or pick a port above 1024. Separately, if a socket from the last run of your program is still sticking around in TIME_WAIT, bind will fail with "Address already in use"; to work around that, you can use setsockopt to set the SO_REUSEADDR flag on the socket before bind(). (This is not recommended for general use, I believe because you may receive data not intended for your program, but for development it's extremely handy.)
Your response starts with \r\n when it shouldn't, it doesn't specify a status code, and you need a blank line after all the headers.
char message[] = "HTTP/1.1 200 Okay\r\nContent-Type: text/html; charset=ISO-8859-4 \r\n\r\n<h1>Hello, client! Welcome to the Virtual Machine Web..</h1>";
As for your port 80 issue: ports below 1024 are privileged, so binding to them requires root; that is why bind reports Permission denied rather than "Address already in use".
You also need to add a Content-Length header, whose value is the length of your HTML body, like this:
char msg[] = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nContent-length: 20\r\n\r\n<h1>Hello World</h1>";

Broken HTML - browsers don't download the whole HTTP response from my webserver, but CURL does

Symptom
I think I messed something up, because both Mozilla Firefox and Google Chrome produce the same error: they don't receive the whole response the webserver sends them. CURL never misses anything; the last line of the quick-scrolling response is always "</html>".
Reason
The reason is that I send the response in multiple parts:
sendHeaders(); // calls sendResponse() with a fixed header
sendResponse(html_opening_part);
for ( ...scan some data... ) {
sendResponse(the_data);
} // for
sendResponse(html_closing_part);
The browsers stop receiving data between sendResponse() calls. Note that the webserver does not close() the socket until the very end.
(Why I'm doing it this way: the program is designed for a non-Linux system; it will run on an embedded computer that doesn't have much memory, most of which is occupied by the lwIP stack. So, to avoid assembling the relatively huge webpage in memory, I send it in parts. The browsers there accepted it; the broken HTML only appeared under Linux.)
Environment
The platform is GNU/Linux (Ubuntu 32-bit with 3.0 kernel). My small webserver sends the stuff back to the client standard way:
int sendResponse(char* data,int length) {
int x = send(fd,data,length,MSG_NOSIGNAL);
if (x == -1) {
perror("this message never printed, so there's no error \n");
if (errno == EPIPE) return 0;
if (errno == ECONNRESET) return 0;
... panic() ... (never happened) ...
} // if send()
} // sendResponse()
And here's the fixed header I am using:
sendResponse(
"HTTP/1.0 200 OK\n"
"Server: MyTinyWebServer\n"
"Content-Type: text/html; charset=UTF-8\n"
"Cache-Control: no-store, no-cache\n"
"Pragma: no-cache\n"
"Connection: close\n"
"\n"
);
Question
Is this normal? Do I have to send the whole response with a single send()? (Which I'm working on now, until a quick solution arrives.)
If you read RFC 2616, you'll see that you should be using CR+LF for the ends of lines.
Aside from that, open the browser developer tools to see the exact requests they are making. Use a tool like Netcat to duplicate the requests, then eliminate each header in turn until it starts working.
Gotcha!
As @Jim advised, I tried sending the same headers with CURL as Mozilla does: fail, broken pipe, etc. I deleted half of the headers: okay. I added them back one by one: fail. Deleted another half: okay... So, the error occurs only if the request header is too long. Bingo.
As I said, there's very little memory in the embedded device, so I don't read the whole request header, only 256 bytes of it. I need only the GET parameters and the "Host" header (and I don't really need even that; it's just used to perform redirects with the same "Host" instead of the IP address).
So, if I don't recv() the whole request header, I cannot send() back the whole response.
Thanks for your advice, dudes!

HTTP GET request problem

I am writing a simple downloader, and I am trying to download a jpg picture.
void accel::download(int threads){
char msg[] = "HEAD /logos/2011/cezanne11-hp.jpg HTTP/1.0\r\nConnection: close\r\n\r\n";
int back = send(socketC, (const char *)&msg, strlen(msg), 0);
char *buff = new char[500];
back = recv(socketC, buff, 500, 0);
cout << buff;
char *buff2 = new char[700];
char msg2[] = "GET /logos/2011/cezanne11-hp.jpg HTTP/1.0\r\nRange: bytes=0-400\r\nConnection: close\r\n\r\n";
back = send(socketC, (const char *)&msg2, strlen(msg2), 0);
back = recv(socketC, buff2, 700, 0);
cout << back;
}
The TCP connection is already initialized, and the first part of my code works: it successfully sends the HEAD message and receives the response. But when it tries to download the picture, recv returns 0. What might be wrong?
Btw, this is a school project, so I am not allowed to use any fancy libraries for this. This is the full picture's address - http://www.google.com/logos/2011/cezanne11-hp.jpg
You don't receive anything because you told the server that you weren't going to make a second request when you specified
Connection: close
in your HEAD request.
This tells the server that you're only going to make ONE request, so it shouldn't bother waiting for a second.
Try changing your first request to a persistent 'keep-alive' connection:
"HEAD /logos/2011/cezanne11-hp.jpg HTTP/1.0\r\nConnection: keep-alive\r\n\r\n";
NOTE: If you don't want the server to go away, you might want to change your second request to keep-alive too.
Generally speaking, HTTP closes the socket. (HTTP 1.1 has persistent (keep-alive) connections, though you seem to have asked the server to close the connection on you in your first command.)
So make sure that your socket is still open after your first receive; I'm willing to bet that it isn't.
Have you verified that the image is actually being sent to your client? Maybe you're not getting any response from the server.
Try using Wireshark to inspect the actual network activity. It will show you exactly what's being sent and received; you may find that nothing is coming back from the server, or spot an issue with your request in the raw traffic.
Once you follow through with what chrisaycock says, you may also want to add a Host: header to your requests. With so much shared hosting around, requests without a Host header (IP-only access) are likely to start failing.