Problem with libcurl cookie engine - libcurl

[Cross-posted from lib-curl mailing list]
I have a single-threaded app (MSVC C++ 2005) built against a static libcurl 7.19.4.
A test application connects to an in-house server and performs a bespoke authentication process that includes posting a couple of forms; when this succeeds it creates a new resource (POST) and then updates the resource (PUT) using If-Match.
I only use a single connection to libcurl (i.e. only one CURL*).
The cookie engine is enabled from the start using
curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "")
The cookie cache is cleared at the end of the authentication process
using curl_easy_setopt(curl, CURLOPT_COOKIELIST, "SESS"). This is required
by the authentication process.
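A minimal sketch of those two calls (the handle name and the placement of the calls are illustrative, not taken from the original post):

CURL *curl = curl_easy_init();

/* Enable the in-memory cookie engine without reading any cookie file */
curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");

/* ... bespoke authentication: post the forms on this same handle ... */

/* Clear all session cookies (those without an expiry date) from the cache */
curl_easy_setopt(curl, CURLOPT_COOKIELIST, "SESS");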
The next call, which completes a successful authentication, results in
a couple of security cookies being returned from the server - they
have no expiry date set.
The server (and I) expect the security cookies to then be sent with
all subsequent requests to the server. The problem is that sometimes
they are sent and sometimes they aren't.
I'm not a cURL expert, so I'm probably doing something wrong, but I
can't figure out what. Running the test app in a loop shows a
random distribution of correct cookie handling.
As a workaround I've disabled the cookie engine and am doing basic
manual cookie handling. Like this it works as expected, but I'd prefer
to use the library if possible.
Does anyone have any ideas?
Thanks
Seb

We've experienced issues with libcurl losing the "session" when the sent headers are a particular size.
The two known cases we've seen are 1425 and 2885 bytes.
When the sent headers are exactly this size, the server doesn't appear to receive the proper cookies. We haven't actually tested against a controlled server to see exactly what it receives.
The workaround we came up with was to alter the User-Agent slightly, adding a space at the end to change the header size.
Here is some code to predict the header size before the request is sent:
size_t PredictHeaderOutSize(CURL *curl, bool doPost, const char* request, char* userAgent, const char* host, const char* form)
{
    size_t predictedHeaderOutSize = 0;
    // Note, so far predicting 1 byte per newline, fix the hard coded #'s below if that turns out to be wrong
    // POST/GET line
    predictedHeaderOutSize += (doPost ? 4 : 3); // POST vs GET
    predictedHeaderOutSize += strlen(request);
    predictedHeaderOutSize += 11; // Extra characters in 'POST <request> HTTP/1.1' not accounted for above
    // User-Agent line
    predictedHeaderOutSize += strlen(userAgent);
    predictedHeaderOutSize += 13;
    // Host: header
    predictedHeaderOutSize += strlen(host);
    predictedHeaderOutSize += 7;
    // Accept: */*
    predictedHeaderOutSize += 12;
    // Cookie:
    struct curl_slist *cookies = NULL;
    struct curl_slist *next_cookie;
    int num_cookies = 0;
    CURLcode res = curl_easy_getinfo(curl, CURLINFO_COOKIELIST, &cookies);
    if (res == CURLE_OK)
    {
        if (cookies != NULL)
        {
            // At least 1 cookie so add the extra space taken on cookie line
            predictedHeaderOutSize += 7;
            next_cookie = cookies;
            num_cookies = 1;
            while (next_cookie)
            {
                std::vector<std::string> cookie = QueueHelper::Split("\t", next_cookie->data, 7);
                if (cookie.size() != 7)
                {
                    // wtf?
                }
                else
                {
                    // For each cookie we add length of key + value + 3 (for the = ; and extra space)
                    predictedHeaderOutSize += cookie[5].length() + cookie[6].length() + 3;
                }
                next_cookie = next_cookie->next;
                num_cookies++;
            }
            curl_slist_free_all(cookies);
        }
    }
    else
    {
        printf("curl_easy_getinfo failed: %s\n", curl_easy_strerror(res));
    }
    if (doPost)
    {
        // Content-Length:
        size_t formLength = strlen(form);
        if (formLength < 10)
            predictedHeaderOutSize += 1;
        if (formLength >= 10 && formLength < 100)
            predictedHeaderOutSize += 2;
        if (formLength >= 100 && formLength < 1000)
            predictedHeaderOutSize += 3;
        if (formLength >= 1000 && formLength < 10000)
            predictedHeaderOutSize += 4;
        predictedHeaderOutSize += 17;
        // Content-Type: application/x-www-form-urlencoded
        predictedHeaderOutSize += 48;
    }
    predictedHeaderOutSize += 2; // 2 newlines at the end? something else? not sure
    return predictedHeaderOutSize;
}
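A hedged sketch of how the prediction might be combined with the User-Agent workaround described above (the agent string and buffer handling are illustrative; 1425 and 2885 are the sizes we have seen fail):

// Illustrative only: pad the User-Agent when the predicted size hits a known bad value.
char userAgent[64] = "MyApp/1.0";   /* hypothetical agent string */
size_t predicted = PredictHeaderOutSize(curl, true, request, userAgent, host, form);
if (predicted == 1425 || predicted == 2885)
    strcat(userAgent, " ");         /* one extra byte changes the header size */
curl_easy_setopt(curl, CURLOPT_USERAGENT, userAgent);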

Related

libmodsecurity as a library to process my own web requests

Is it possible to use libmodsecurity as a library and process requests on my own? I was messing with the examples in the ModSecurity examples repo, but I can't figure out how to make it take my request. I tried simple_example_using_c.c but with no success. Does anyone have an idea if this is possible?
#include <stdio.h>
#include <stdlib.h>
#include <modsecurity/modsecurity.h>
#include <modsecurity/rules_set.h>

char rulez[] = "basic_rules.conf";
const char *request = "" \
    "GET /?test=test HTTP/\n" \
    "Host: localhost:9999\n" \
    "Content-Length: 27\n" \
    "Content-Type: application/x-www-form-urlencoded\n";

int main(){
    ModSecurity *modsec;
    RulesSet *setRulez;
    Transaction *transakcyja;
    const char *error;
    modsec = msc_init();
    printf("%s\n", msc_who_am_i(modsec));
    msc_set_connector_info(modsec, "ModSecurity simple API");
    setRulez = msc_create_rules_set();
    int rulz = msc_rules_add_file(setRulez, rulez, &error);
    if(rulz == -1){
        fprintf(stderr, "huston rulez problem \n");
        fprintf(stderr, "%s\n", error);
        return -1;
    }
    msc_rules_dump(setRulez);
    transakcyja = msc_new_transaction(modsec, setRulez, NULL);
    if(transakcyja == NULL){
        fprintf(stderr, "Init bad");
        return -1;
    }
    msc_process_connection(transakcyja, "127.0.0.1", 9998, "127.0.0.1", 9999);
    msc_process_uri(transakcyja, "http://127.0.0.1:9999/?k=test&test=test", "GET", "1.1");
    msc_process_request_body(transakcyja);
    msc_process_response_headers(transakcyja, 200, "HTTP 1.3");
    msc_process_response_body(transakcyja);
    msc_process_logging(transakcyja);
    msc_rules_cleanup(setRulez);
    msc_cleanup(modsec);
    return 0;
}
Edit: I know a bit more now, but does anyone know how to pass the request to the transaction? I know there is addRequestHeader(), but it takes one header at a time and I can't really figure it out.
I think you have to understand how ModSecurity works.
Every transaction goes through five phases:
parse request headers
parse request body
parse response headers
parse response body
make logs (and check transaction variables, e.g. anomaly scores in the case of CRS)
(Plus phase 0: processing the connection itself.)
In the examples you can see a couple of functions for each phase.
This is a common HTTP request:
POST /set.php HTTP/1.1
Host: foobar.com
User-Agent: something
Accept: */*
Content-Type: application/x-www-form-urlencoded
Content-Length: 7

a=1&b=2
Once you have created a transaction object, you have to add the data for each phase to it.
First look at the status line and add the necessary parts. Assume your application already has the information: the client IP (std::string cip) and port (int cport), and the server IP (std::string dip) and port (int dport). You also have the URI (std::string uri), the method (std::string method) and the protocol version (std::string vers). You also need an object of type modsecurity::ModSecurityIntervention *it.
// phase 0
trans->processConnection(cip.c_str(), cport, dip.c_str(), dport);
trans->processURI(uri.c_str(), method.c_str(), vers.c_str());
trans->intervention(it);
Now you have to check the it variable, e.g. it->status. For more information, check the source.
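A minimal sketch of that check, assuming the usual libmodsecurity3 intervention fields (disruptive, status, url, log):

// Illustrative: act on whatever the last intervention() call filled in
if (it->disruptive) {
    if (it->log != NULL)
        std::cerr << "ModSecurity: " << it->log << std::endl;
    if (it->url != NULL) {
        // redirect the client to it->url
    } else {
        // reply with the HTTP status code in it->status (e.g. 403)
    }
}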
Now assume you have a variable (a list) containing the parsed headers. You have to add these headers one by one to the transaction:
for (your_iterator it = headerType.begin(); it != headerType.end(); ++it) {
    const std::string key = it->first.as<std::string>(); // key
    const std::string val = it->second.as<std::string>(); // val
    trans->addRequestHeader(key, val);
}
Now you can process the headers and check the results. Note that when you process a phase, the engine evaluates all rules whose phase value matches: 1, 2, 3, 4 or 5.
// phase 1
trans->processRequestHeaders();
trans->intervention(it);
In the next steps, you have to add the request body and process it, then take the response headers and body (from the upstream) and repeat the steps above...
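A rough sketch of those remaining phases, assuming the usual libmodsecurity3 method names and that body/resp are std::strings holding the raw request and response bodies:

// phase 2 - request body
trans->appendRequestBody(
    reinterpret_cast<const unsigned char*>(body.data()), body.size());
trans->processRequestBody();
trans->intervention(it);

// phase 3 - response headers (status and protocol come from the upstream)
trans->addResponseHeader("Content-Type", "text/html");
trans->processResponseHeaders(200, "HTTP/1.1");
trans->intervention(it);

// phase 4 - response body
trans->appendResponseBody(
    reinterpret_cast<const unsigned char*>(resp.data()), resp.size());
trans->processResponseBody();
trans->intervention(it);

// phase 5 - logging
trans->processLogging();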
Hopefully you can now see how it works.
I've made a utility which runs the CRS test cases against libmodsecurity3 using the CRS rules. The tool is available here: ftwrunner.
Hope this helped.

Range downloads in HTTP

I need to download an HTML page in chunks. I have built a GET request which can download a certain range of data, but I am unsuccessful in doing this repeatedly.
Basically I have to receive the first 0-99 bytes, then 100-199, and so on...
Also I would be grateful to know how to find out the exact size of the file being received beforehand, using C or C++ code.
Following is my code.
I have omitted connecting to the socket etc. as that has been done successfully.
int c = 0, s = 0;
while(1)
{
    get = build_get_query(host, page, s);
    c += 1;
    fprintf(stderr, "Query is:\n<<START>>\n%s<<END>>\n", get);
    //Send the query to the server
    int sent = 0;
    cout << "sending " << c << endl;
    while(sent < strlen(get))
    {
        tmpres = send(sock, get+sent, strlen(get)-sent, 0);
        if(tmpres == -1)
        {
            perror("Can't send query");
            exit(1);
        }
        sent += tmpres;
    }
    //now it is time to receive the page
    memset(buf, 0, sizeof(buf));
    int htmlstart = 0;
    char * htmlcontent;
    cout << "receiving " << c << endl;
    while((tmpres = recv(sock, buf, BUFSIZ, 0)) > 0)
    {
        if(htmlstart == 0)
        {
            /* Under certain conditions this will not work.
             * If the \r\n\r\n part is split into two messages
             * it will fail to detect the beginning of HTML content
             */
            htmlcontent = strstr(buf, "\r\n\r\n");
            if(htmlcontent != NULL)
            {
                htmlstart = 1;
                htmlcontent += 4;
            }
        }
        else
        {
            htmlcontent = buf;
        }
        if(htmlstart)
        {
            fprintf(stdout, htmlcontent);
        }
        memset(buf, 0, tmpres);
    }
    if(tmpres < 0)
    {
        perror("Error receiving data");
    }
    s += 100;
    if(c == 5)
        break;
}

char *build_get_query(char *host, char *page, int i)
{
    char *query;
    char *getpage = page;
    int j = i + 99;
    char tpl[100] = "GET /%s HTTP/1.1\r\nHost: %s\r\nRange: bytes=%d-%d\r\nUser-Agent: %s\r\n\r\n";
    if(getpage[0] == '/')
    {
        getpage = getpage + 1;
        fprintf(stderr, "Removing leading \"/\", converting %s to %s\n", page, getpage);
    }
    query = (char *)malloc(strlen(host)+strlen(getpage)+8+strlen(USERAGENT)+strlen(tpl)-5);
    sprintf(query, tpl, getpage, host, i, j, USERAGENT);
    return query;
}
Also I would be grateful to know how to find out the exact size of the file being received beforehand, using C or C++ code.
If the server supports a range request for the specific resource (which is not guaranteed), then the answer will look like this:
HTTP/1.1 206 partial content
Content-Range: bytes 100-199/12345
This means that the response will contain the bytes 100..199 and that the total size of the content is 12345 bytes.
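As an illustrative sketch only, the total size can be pulled out of that header once you have the Content-Range line in a std::string named line (the variable names are mine, not from the original answer):

// Parse "Content-Range: bytes 100-199/12345" to obtain the total size
long first = 0, last = 0, total = 0;
if (sscanf(line.c_str(), "Content-Range: bytes %ld-%ld/%ld", &first, &last, &total) == 3)
{
    // 'total' now holds the full size of the resource (12345 in the example above)
}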
There are lots of questions here which deal with parsing HTTP headers, so I will not go into detail about how to use C/C++ to extract these data from the header.
Please note also that you are doing an HTTP/1.1 request and thus must deal with possible chunked responses and implicit keep-alive. I really recommend using an existing HTTP library instead of doing it all by hand and getting it wrong. If you really want to implement it all on your own, please study the HTTP specification.

Boost::Asio HTTP Server extremely slow

I'm currently trying to create an HTTP server using Boost.Asio; I modelled it on HTTP Server 3.
Currently I just read the request and always return an OK message, so nothing special or time consuming.
The problem I've come across is that running the server with 12 threads (16 cores @ 2.53 GHz), it handles around 200-300 requests per second.
I did the same in C# using HttpListener; running with 12 threads, it handles around 5000-7000 requests.
What the heck is Boost.Asio doing?
Instrumentation profiling with Visual Studio gives the following "Functions With Most Individual Work":
Name                                        Exclusive Time %
GetQueuedCompletionStatus                   44.46
std::_Lockit::_Lockit                       14.54
std::_Container_base12::_Orphan_all          3.46
std::_Iterator_base12::~_Iterator_base12     2.06
Edit 1:
if (!err) {
    //Add data to client request
    if(client_request_.empty())
        client_request_ = std::string(client_buffer_.data(), bytes_transferred);
    else
        client_request_ += std::string(client_buffer_.data(), bytes_transferred);
    //Check if headers complete
    client_headerEnd_ = client_request_.find("\r\n\r\n");
    if(client_headerEnd_ == std::string::npos) {
        //Headers not yet complete, read again
        client_socket_.async_read_some(boost::asio::buffer(client_buffer_),
            boost::bind(&session::handle_client_read_headers, shared_from_this(),
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    } else {
        //Search Cookie
        std::string::size_type loc = client_request_.find("Cookie");
        if(loc != std::string::npos) {
            //Found Cookie
            std::string::size_type locend = client_request_.find_first_of("\r\n", loc);
            if(locend != std::string::npos) {
                std::string lCookie = client_request_.substr(loc, (locend-loc));
                loc = lCookie.find(": ");
                if(loc != std::string::npos) {
                    std::string sCookies = lCookie.substr(loc+2);
                    std::vector<std::string> vCookies;
                    boost::split(vCookies, sCookies, boost::is_any_of(";"));
                    for (std::size_t i = 0; i < vCookies.size(); ++i) {
                        std::vector<std::string> vCookie;
                        boost::split(vCookie, vCookies[i], boost::is_any_of("="));
                        if(vCookie[0].compare("sessionid") == 0) {
                            if(vCookie.size() > 1) {
                                client_sessionid_ = vCookie[1];
                                break;
                            }
                        }
                    }
                }
            }
        }
        //Search Content-Length
        loc = client_request_.find("Content-Length");
        if(loc == std::string::npos) {
            //No Content-Length, no Content? -> stop further reading
            send_bad_request();
            return;
        }
        else {
            //Parse Content-Length, for further body reading
            std::string::size_type locend = client_request_.find_first_of("\r\n", loc);
            if(locend == std::string::npos) {
                //Couldn't find header end, can't parse Content-Length -> stop further reading
                send_bad_request();
                return;
            }
            std::string lHeader = client_request_.substr(loc, (locend-loc));
            loc = lHeader.find(": ");
            if(loc == std::string::npos) {
                //Couldn't find colon, can't parse Content-Length -> stop further reading
                send_bad_request();
                return;
            }
            //Save Content-Length
            client_request_content_length_ = boost::lexical_cast<std::string::size_type>(lHeader.substr(loc+2));
            //Check if already read complete body
            if((client_request_.size()-(client_headerEnd_)) < client_request_content_length_) {
                //Content-Length greater than current body, start reading.
                client_socket_.async_read_some(boost::asio::buffer(client_buffer_),
                    boost::bind(&session::handle_client_read_body, shared_from_this(),
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
            }
            else {
                //Body is complete, start handling
                handle_request();
            }
        }
    }
}
Edit 2:
The client used for testing is a simple C# application which starts 128 threads, each iterating 1000 times without any Sleep.
System.Net.HttpWebRequest req = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(BaseUrl);
req.Method = "POST";
byte[] buffer = Encoding.ASCII.GetBytes("{\"method\":\"User.Login\",\"params\":[]}");
req.GetRequestStream().Write(buffer, 0, buffer.Length);
req.GetRequestStream().Close();
The reason for the slowness is probably that the Boost.Asio HTTP Server 3 example always closes the connection after each response, forcing the client to create a new connection for each request. Opening and closing a connection on every request takes a lot of time. Obviously, this cannot outperform a server that supports HTTP/1.1 keep-alive (i.e. one that doesn't close the client connection and allows the client to reuse it for subsequent requests).
Your C# server, System.Net.HttpListener, does support keep-alive. The client, System.Net.HttpWebRequest, also has keep-alive enabled by default. So the connections are reused in that configuration.
Adding keep-alive to the HTTP Server 3 example is straightforward:
inside connection::handle_read(), check whether the client requested keep-alive and store this flag in the connection
change connection::handle_write() so that it initiates a graceful connection close only when the client doesn't support keep-alive; otherwise just initiate async_read_some() like you already do in connection::start():
socket_.async_read_some(boost::asio::buffer(buffer_),
strand_.wrap(
boost::bind(&connection::handle_read, shared_from_this(),
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred)));
And don't forget to clear your request/reply and reset the request_parser before calling async_read_some().
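A rough sketch of what the modified connection::handle_write() could look like (keep_alive_ is an assumed member flag set in handle_read(); it is not part of the original example):

void connection::handle_write(const boost::system::error_code& e)
{
    if (!e)
    {
        if (keep_alive_)
        {
            // Reuse the connection: reset per-request state and wait for the next request
            request_ = request();
            reply_ = reply();
            request_parser_.reset();
            socket_.async_read_some(boost::asio::buffer(buffer_),
                strand_.wrap(
                    boost::bind(&connection::handle_read, shared_from_this(),
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred)));
        }
        else
        {
            // Initiate graceful connection closure, as in the original example
            boost::system::error_code ignored_ec;
            socket_.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ignored_ec);
        }
    }
}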
It seems that client_request_.find("\r\n\r\n") is called repeatedly, hunting for the end tokens from the beginning of the string on every loop. Use a starting position, such as client_request_.find("\r\n\r\n", lastposition); (using bytes_transferred).
It's also possible to use async_read_until(..., "\r\n\r\n"); found here,
or async_read, which should read everything (instead of some).
About the HTTP Server 3 example: look at the request_parser source code, the parse/consume methods. It is really not optimal because it takes data from the buffer byte by byte and works on each byte, pushing into a std::string with push_back and so on. It's just an example.
Also, if you are using asio::strand, notice that it uses a mutex to lock the "strand implementation". For an HTTP server it is easily possible to remove asio::strand entirely, so I recommend doing that. If you want to stay with strands, you can set these defines at compile time to avoid delays on locking:
-DBOOST_ASIO_STRAND_IMPLEMENTATIONS=30000 -DBOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION

Receiving only necessary data with C++ Socket

I'm just trying to get the contents of a page with their headers...but it seems that my buffer of size 1024 is either too large or too small for the last packet of information coming through...I don't want to get too much or too little, if that makes sense. Here's my code. It's printing out the page just fine with all the information, but I want to ensure that it's correct.
//Build HTTP Get Request
std::stringstream ss;
ss << "GET " << url << " HTTP/1.0\r\nHost: " << strHostName << "\r\n\r\n";
std::string req = ss.str();
// Send Request
send(hSocket, req.c_str(), strlen(req.c_str()), 0);
// Read from socket into buffer.
do
{
    nReadAmount = read(hSocket, pBuffer, sizeof pBuffer);
    printf("%s", pBuffer);
}
while(nReadAmount != 0);
nReadAmount = read(hSocket, pBuffer, sizeof pBuffer);
printf("%s", pBuffer);
This is broken. You can only use the %s format specifier for a C-style (zero-terminated) string. How is printf supposed to know how many bytes to print? That information is in nReadAmount, but you don't use it.
Also, you call printf even if read fails.
The simplest fix:
do
{
    nReadAmount = read(hSocket, pBuffer, (sizeof pBuffer) - 1);
    if (nReadAmount <= 0)
        break;
    pBuffer[nReadAmount] = 0;
    printf("%s", pBuffer);
} while(1);
The correct way to read an HTTP reply is to read until you have received a full LF-delimited line (some servers use a bare LF even though the official spec says to use CRLF); that line contains the response code and version. Then keep reading LF-delimited lines, which are the headers, until you encounter a zero-length line indicating the end of the headers. Then you have to analyze the headers to figure out how the remaining data is encoded, so you know the proper way to read it and how it is terminated. There are several different possibilities; refer to RFC 2616 Section 4.4 for the actual rules.
In other words, your code needs to use this kind of structure instead (pseudo code):
// Send Request
send(hSocket, req.c_str(), req.length(), 0);
// Read Response
std::string line = ReadALineFromSocket(hSocket);
int rescode = ExtractResponseCode(line);
std::vector<std::string> headers;
do
{
    line = ReadALineFromSocket(hSocket);
    if (line.length() == 0) break;
    headers.push_back(line);
}
while (true);
if (
    ((rescode / 100) != 1) &&
    (rescode != 204) &&
    (rescode != 304) &&
    (request is not "HEAD")
)
{
    if ((headers has "Transfer-Encoding") && (Transfer-Encoding != "identity"))
    {
        // read chunks until a 0-length chunk is encountered.
        // refer to RFC 2616 Section 3.6 for the format of the chunks...
    }
    else if (headers has "Content-Length")
    {
        // read how many bytes the Content-Length header says...
    }
    else if ((headers has "Content-Type") && (Content-Type == "multipart/byteranges"))
    {
        // read until the terminating MIME boundary specified by Content-Type is encountered...
    }
    else
    {
        // read until the socket is disconnected...
    }
}
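The pseudo code above assumes a line-reading helper; a minimal sketch of one (reading a byte at a time for clarity rather than efficiency, and with my own error conventions) might be:

// Reads one LF-terminated line from the socket and strips the trailing CR/LF.
// Returns an empty string on a blank line (end of headers) or on error/EOF.
std::string ReadALineFromSocket(int hSocket)
{
    std::string line;
    char ch;
    while (read(hSocket, &ch, 1) == 1)
    {
        if (ch == '\n')
            break;
        if (ch != '\r')
            line += ch;
    }
    return line;
}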

Losing characters in TCP Telnet transmission

I'm using Winsock to send commands through Telnet; but for some reason, when I try to send a string, a few characters get dropped occasionally. I use send:
int SendData(const string & text)
{
    send(hSocket, text.c_str(), static_cast<int>(text.size()), 0);
    Sleep(100);
    send(hSocket, "\r", 1, 0);
    Sleep(100);
    return 0;
}
Any suggestions?
Update:
I checked, and the error still occurs even if all the characters are sent. So I decided to change the send function so that it sends individual characters and checks that they have been sent:
void SafeSend(const string &text)
{
    char char_text[1];
    for(size_t i = 0; i < text.size(); ++i)
    {
        char_text[0] = text[i];
        while(send(hSocket, char_text, 1, 0) != 1);
    }
}
Also, it drops characters in a peculiar way, i.e. in the middle of the sentence. E.g.
set variable [fp]exit_flag = true
is sent as
ariable [fp]exit_flag = true
Or
set variable [fp]app_flag = true
is sent as
setrable [fp]app_flag = true
As mentioned in the comments, you absolutely need to check the return value of send, as it can return after sending only part of your buffer.
You nearly always want to call send in a loop similar to the following (not tested, as I don't have a Windows development environment available at the moment):
bool SendString(const std::string& text) {
    int remaining = text.length();
    const char* buf = text.data();
    while (remaining > 0) {
        int sent = send(hSocket, buf, remaining, 0);
        if (sent == SOCKET_ERROR) {
            /* Error occurred check WSAGetLastError() */
            return false;
        }
        remaining -= sent;
        buf += sent;
    }
    return true;
}
Update:
This is not relevant for the OP, but calls to recv should also be structured in the same way as above.
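For completeness, a minimal sketch of such a recv loop (the helper name and the exact-length contract are my own, not from the original answer):

// Receive exactly 'len' bytes into 'buf'; recv may return fewer bytes per call.
bool RecvAll(SOCKET s, char* buf, int len) {
    int received = 0;
    while (received < len) {
        int n = recv(s, buf + received, len - received, 0);
        if (n == SOCKET_ERROR)
            return false;   /* check WSAGetLastError() */
        if (n == 0)
            return false;   /* connection closed by peer */
        received += n;
    }
    return true;
}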
To debug the problem further, Wireshark (or equivalent software) is excellent in tracking down the source of the problem.
Filter the packets you want to look at (it has lots of options) and check if they include what you think they include.
Also note that telnet is a protocol with numerous RFCs. Most of the time you can get away with just sending raw text, but it's not really guaranteed to work.
You mention that the Windows telnet client sends different bytes from yours; capture a minimal sequence from both clients and compare them. Use the RFCs to figure out what the other client does differently and why. You can use "View -> Packet Bytes" to bring up the data of the packet and easily inspect and copy/paste the hex dump.