I'm trying to write a small C++ web server which handles GET, POST, and HEAD requests. My problem is that I don't know how to parse the headers, message body, etc. It's listening on the socket, and I can even write stuff out to the browser just fine, but I'm curious how I should do this in C++.
As far as I know, a standard GET/POST request should look something like this:
GET /index HTTP/1.1
Host: 192.168.0.199:80
Connection: keep-alive
Accept: */*
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.97 Safari/537.22
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
this is the message body
All lines end with '\r\n'.
Should I just split the request at '\n' and trim the lines (and if so, how)?
Also, how do I handle files in POST data?
The main thing I want to achieve is to get a vector of header key=>value pairs, a string with the request method, the POST data (like in PHP, if present), and the query string (/index for example) as a string or as a vector split by '/'.
Thanks!
Before doing everything yourself, let me introduce you to Poco:
#include <Poco/Net/HTTPServer.h>
#include <Poco/Net/HTTPRequestHandler.h>
#include <Poco/Net/HTTPRequestHandlerFactory.h>
#include <Poco/Net/HTTPServerParams.h>
#include <Poco/Net/HTTPServerRequest.h>
#include <Poco/Net/HTTPServerResponse.h>
#include <Poco/Net/ServerSocket.h>
#include <Poco/Thread.h>

using namespace Poco;
using namespace Poco::Net;

class MyHTTPRequestHandler : public HTTPRequestHandler
{
public:
    virtual void handleRequest(HTTPServerRequest& request,
                               HTTPServerResponse& response)
    {
        // Write your HTML response into the response object here.
    }
};

class MyRequestHandlerFactory : public HTTPRequestHandlerFactory
{
public:
    MyRequestHandlerFactory() {}

    HTTPRequestHandler* createRequestHandler(const HTTPServerRequest& request)
    {
        // HTTP method names are upper-case. The server takes ownership of the
        // returned handler and deletes it after the request has been served,
        // so allocate a fresh handler for every request.
        const std::string& method = request.getMethod();
        if (method == "GET" || method == "POST")
            return new MyHTTPRequestHandler;
        return 0;
    }
};

int main()
{
    HTTPServerParams* params = new HTTPServerParams; // reference-counted, released by the server
    params->setMaxQueued(100);
    params->setMaxThreads(16);
    ServerSocket svs(80);
    HTTPServer srv(new MyRequestHandlerFactory, svs, params);
    srv.start();
    while (true)
        Thread::sleep(1000);
}
Boost.Asio is a good library, but it is relatively low-level. You'll really want to use a higher-level library. There's a modern C++ library called node.native which you should check out. A very simple server can be implemented as follows:
#include <iostream>
#include <native/native.h>
using namespace native::http;
int main() {
http server;
if(!server.listen("0.0.0.0", 8080, [](request& req, response& res) {
res.set_status(200);
res.set_header("Content-Type", "text/plain");
res.end("C++ FTW\n");
})) return 1; // Failed to run server.
std::cout << "Server running at http://0.0.0.0:8080/" << std::endl;
return native::run();
}
It doesn’t get much simpler than this.
Yes, this is basically just string parsing, and then creating the response that goes back to the browser, following the specification.
But if this is not just a hobby project and you do not want to take on a really big task, you should use either Apache if you need a web server you can extend, tntnet if you need a C++ web framework, or cpp-netlib if you need C++ network building blocks.
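To make the string-parsing part concrete, here is a minimal sketch, under the assumption that the whole request (or at least the complete header block) is already buffered in a std::string. It splits the request into the method, the target, a header map, and the body. A real server also has to cope with partial reads, case-insensitive header names, Content-Length and chunked bodies, and malformed input.
#include <map>
#include <sstream>
#include <string>

struct ParsedRequest {
    std::string method;   // e.g. "GET"
    std::string target;   // e.g. "/index"
    std::string version;  // e.g. "HTTP/1.1"
    std::map<std::string, std::string> headers;
    std::string body;
};

ParsedRequest parseRequest(const std::string& raw) {
    ParsedRequest req;
    // The header block ends at the first blank line (CRLF CRLF); everything after it is the body.
    const std::string::size_type headerEnd = raw.find("\r\n\r\n");
    const std::string headerBlock = raw.substr(0, headerEnd);
    if (headerEnd != std::string::npos)
        req.body = raw.substr(headerEnd + 4);

    std::istringstream lines(headerBlock);
    std::string line;
    bool requestLineSeen = false;
    while (std::getline(lines, line)) {          // splits at '\n'
        if (!line.empty() && line.back() == '\r')
            line.pop_back();                     // trim the '\r' that getline leaves behind
        if (!requestLineSeen) {
            // Request line: METHOD SP TARGET SP VERSION
            std::istringstream first(line);
            first >> req.method >> req.target >> req.version;
            requestLineSeen = true;
            continue;
        }
        const std::string::size_type colon = line.find(':');
        if (colon == std::string::npos)
            continue;                            // not a "Name: value" line, skip it
        std::string name = line.substr(0, colon);
        std::string value = line.substr(colon + 1);
        value.erase(0, value.find_first_not_of(" \t")); // trim leading whitespace
        req.headers[name] = value;
    }
    return req;
}
For the example request above, parseRequest would give you method "GET", target "/index", and headers["Host"] == "192.168.0.199:80".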
You may want to consider Proxygen, Facebook's C++ HTTP framework. It's open source under a BSD license.
I am working with gRPC and Protobuf, using a C++ server and a C++ client, as well as a grpc-js client. Is there a way to get a read on all of the HTTP request/response headers from the transport layer in gRPC? I am looking for the sort of typical client/server HTTP headers; in particular, I would like to see which version of the protocol is being used (HTTP/1.1 or HTTP/2). I know that gRPC is supposed to be using HTTP/2, but I am trying to confirm it at a low level.
In a typical gRPC client implementation you have something like this:
class PingPongClient {
public:
PingPongClient(std::shared_ptr<Channel> channel)
: stub_(PingPong::NewStub(channel)) {}
// Assembles the client's payload, sends it and presents the response back
// from the server.
PingPongReply PingPong(PingPongRequest request) {
// Container for the data we expect from the server.
PingPongReply reply;
// Context for the client. It could be used to convey extra information to
// the server and/or tweak certain RPC behaviors.
ClientContext context;
// The actual RPC.
Status status = stub_->Ping(&context, request, &reply);
// Act upon its status.
if (status.ok()) {
return reply;
} else {
auto errorMsg = std::to_string(static_cast<int>(status.error_code())) + ": " + status.error_message();
std::cout << errorMsg << std::endl;
throw std::runtime_error(errorMsg);
}
}
private:
std::unique_ptr<PingPong::Stub> stub_;
};
and on the server side, something like:
class PingPongServiceImpl final : public PingPong::Service {
Status Ping(
ServerContext* context,
const PingPongRequest* request,
PingPongReply* reply
) override {
std::cout << "PingPong" << std::endl;
printContextClientMetadata(context->client_metadata());
if (request->input_msg() == "hello") {
reply->set_output_msg("world");
} else {
reply->set_output_msg("I can't pong unless you ping me 'hello'!");
}
std::cout << "Replying with " << reply->output_msg() << std::endl;
return Status::OK;
}
};
I would think that either ServerContext or the request object might have access to this information, but context seems to only provide an interface into metadata, which is custom.
None of the gRPC C++ examples give any indication that there is such an API, nor do any of the associated source/header files in the gRPC source code. I have exhausted my options here in terms of tutorials, blog posts, videos, and documentation - I asked a similar question on the grpc-io forum, but have gotten no takers. Hoping the SO crew has some insights here!
I should also note that I experimented with passing a variety of environment variables as flags to the running processes to see if I can get details about HTTP headers, but even with these flags enabled (the HTTP-related ones), I do not see basic HTTP headers.
First, the gRPC libraries absolutely do use HTTP/2. The protocol is explicitly defined in terms of HTTP/2.
The gRPC libraries do not directly expose the raw HTTP headers to the application. However, they do have trace logging options that can log a variety of information for debugging purposes, including headers. The tracers can be enabled by setting the environment variable GRPC_TRACE. The environment variable GRPC_VERBOSITY=DEBUG should also be set to make sure that all of the logs are output. More information can be found in this document describing how the library uses environment variables.
In the C++ library, the http tracer should log the raw headers. The grpc-js library has different internals and different tracer definitions, so you should use the call_stream tracer for that one. Those will also log other request information, but it should be pretty easy to pick out the headers.
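For example, a minimal sketch of enabling those tracers from inside a C++ process, under the assumption that the variables are set before any gRPC objects are created (setting GRPC_TRACE=http GRPC_VERBOSITY=DEBUG in the shell before launching works the same way; setenv is POSIX, so use _putenv_s on Windows):
#include <cstdlib>

int main(int argc, char** argv) {
    // Must happen before the first gRPC call so the library picks the values up.
    setenv("GRPC_TRACE", "http", 1);       // the C++ library's "http" tracer logs the raw headers
    setenv("GRPC_VERBOSITY", "DEBUG", 1);  // make sure the debug-level log lines are emitted
    // ... create the channel/stub or build and start the server as in the snippets above ...
    return 0;
}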
I have Python code that sends a POST request and gets JSON back; I need to rewrite it in C++ (Windows 10, Visual Studio 2019). I don't understand which tools can really do everything I need without complicating the code.
There will be a console application that must send a request to send or receive data, more precisely a video stream.
I read about Boost.Asio, but it seems to work only with sockets; is there any way without them? At first I wanted to use it, as the most well-known option. I read about curl, but it hasn't been updated for a long time; is it still relevant?
headers_predict = {
"Content-type": "application/json;charset=UTF-8",
"Accept": "application/json",
"X-Session-ID": session_id
}
data_predict = {
"audio": {
"data": sound_base64,
"mime": "audio/pcm16"
},
"package_id": ""
}
url = 'https://cp.speechpro.com/recognize'
r = requests.post(url, headers=headers_predict,
data=json.dumps(data_predict))
print('Response: %s' % r.text)
I wouldn't want to use sockets, because I don't understand them.
I need to be able to set the header and data as a json.
"sockets, is there any way without them?"
Technically, HTTP does not specify the underlying transport protocol and it can work with any sort of streaming transport. You could for example write the request into a file.
But, if you currently use TCP and don't want to change that, then you must use sockets. You don't need to interact with them directly if you use an existing HTTP client library.
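For illustration, here is a minimal sketch of the same POST done with libcurl, one such existing HTTP client library. The URL, header names, and JSON shape mirror the Python snippet above; the session ID and base64 audio are placeholders, and the JSON body is written by hand here, though a library such as nlohmann/json could build it instead.
#include <curl/curl.h>
#include <iostream>
#include <string>

// Append each chunk of the response body to a std::string.
static size_t collect(char* data, size_t size, size_t nmemb, void* userp) {
    static_cast<std::string*>(userp)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    struct curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json;charset=UTF-8");
    headers = curl_slist_append(headers, "Accept: application/json");
    headers = curl_slist_append(headers, "X-Session-ID: <session id here>");   // placeholder

    const std::string body =
        R"({"audio":{"data":"<base64 sound>","mime":"audio/pcm16"},"package_id":""})";

    std::string response;
    curl_easy_setopt(curl, CURLOPT_URL, "https://cp.speechpro.com/recognize");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());   // setting a body makes it a POST
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

    CURLcode rc = curl_easy_perform(curl);
    if (rc == CURLE_OK)
        std::cout << "Response: " << response << std::endl;
    else
        std::cerr << "curl error: " << curl_easy_strerror(rc) << std::endl;

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}
libcurl handles the sockets and TLS internally, so the application code never touches them.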
I'm working with an STM32 microcontroller and the C language and want to send data to and receive data from my website. I can receive a .txt file from the website with the "GET" method via this code:
static const char http_request[] = "GET "WEBSITE_SUB_ADDRESS" HTTP/1.1\r\nHost: "WEBSITE_ADDRESS"\r\n\r\n";
net_sock_send(socket, (uint8_t *) http_request, len);
net_sock_recv(socket, (uint8_t *) buffer + read, NET_BUF_SIZE - read);
Now I want to send or upload the data to the website in a file with http-method (POST or PUT, ...). How can I do it?
You first need to decide if you want to use POST or PUT.
The PUT method completely replaces whatever currently exists at the target URL with something else. With this method, you can create a new resource or overwrite an existing one given you know the exact Request-URI. ...In short, the PUT method is used to create or overwrite a resource at a particular URL that is known by the client.
The HTTP POST method is used to send user-generated data to the web server. For example, a POST method is used when a user comments on a forum or if they upload a profile picture. A POST method should also be used if you do not know the specific URL of where your newly created resource should reside. ...In short, the POST method should be used to create a subordinate (or child) of the resource identified by the Request-URI.
from https://www.keycdn.com/support/put-vs-post
While connected to the server, you would send an HTTP header just as you have done with the GET request; it would look something like this:
POST /test HTTP/1.1\r\n
Host: www.myServer.com\r\n
Content-Type: text/plain\r\n
Content-Length: 8\r\n
Accept: */*\r\n
\r\n
someData
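In C, the same request can be assembled as a single string literal in the style of the GET string from the question. This is only a sketch: the Content-Length value must match the body byte count exactly, and the blank \r\n line has to come after the last header, immediately before the body.
static const char http_post_request[] =
    "POST /test HTTP/1.1\r\n"
    "Host: www.myServer.com\r\n"
    "Content-Type: text/plain\r\n"
    "Content-Length: 8\r\n"
    "Accept: */*\r\n"
    "\r\n"            /* the blank line ends the header block */
    "someData";       /* exactly 8 bytes, matching Content-Length */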
You might also want to check whether the server received the message by looking at the header that is sent back to you; it should include HTTP/1.1 200 OK.
Edit: to get it into a file, try /test/mytext.txt, but I don't have a way of testing whether this works.
A good place to test the request is Post Test Server V2. Hope this helps.
@Flynn Harrison
I tested your method as follows:
static const char http_request[] = "POST "SUB_ADDRESS" HTTP/1.1\r\n"
"Host: "HOST_ADDRESS"\r\n\r\n"
"Content-Type: text/plain\r\n"
"Content-Lenght: 13\r\n"
"Accept: */*\r\n"
"\r\n"
"Data for Write Test";
and then:
net_sock_setopt(socket, "tls_server_name", (uint8_t*)HOST_ADDRESS, sizeof(HOST_ADDRESS));
net_sock_open(socket, HOST_ADDRESS, TIME_SOURCE_HTTP_PORT, 0);
net_sock_send(socket, (uint8_t *) http_request, len);
net_sock_recv(socket, (uint8_t *) buffer + read, NET_BUF_SIZE - read);
When I tried "/test.txt" as SUB_ADDRESS, I get the HTTP/1.1 200 OK message, but immediately after receiving the file contents, in the same buffer, I receive the HTTP/1.1 400 Bad Request message, and I do not see any changes to the file. My response from the server is as follows:
HTTP/1.1 200 OK
.
.
.
This is a Test.... (Text-File Content)
HTTP/1.1 400 Bad Request\r\nDate: Fri, 09 Aug 2019 09:03:56
.
.
.
On the site you mentioned, the POST method works well, but its mechanism is not clear enough for me to use it. I tried to test the POST method against that site with my device but got the error "411 Length Required".
I cannot seem to find any way to set a cookie programmatically using WebEngine / WebView in JavaFX. The API doesn't give any idea as to how to obtain an HttpRequest-like object to modify the headers (which is what I use in the app for XML-RPC), or any sort of cookie manager.
No questions on this page seem to touch on the issue either - there is this but it just disables cookies when in applet to fix a bug, my app is on desktop btw.
The only way I imagine I could do it is by requesting the first page (which requires a cookie with a session ID to load properly), getting an "access denied"-style message, executing some JavaScript in the page context which sets the cookie, and then refreshing. This solution would be a horrible user experience, though.
How do I set a cookie using WebEngine?
Update: Taking a clue from a question linked above, I tried digging around for some examples of using CookieManager and related APIs. I found this code, which I then tried to incorporate into my app, with weird results;
MyCookieStore cookie_store = new MyCookieStore();
CookieManager cookie_manager = new CookieManager(cookie_store, new MyCookiePolicy());
CookieHandler.setDefault(cookie_manager);
WebView wv = new WebView();
Now let's say we do this:
String url = "http://www.google.com/";
wv.getEngine().load(url);
Debugging in Eclipse after this request has been made shows that the cookie store map holds a cookie:
{http://www.google.com/=[NID=67=XWOQNK5VeRGEIEovNQhKsQZ5-laDaFXkzHci_uEI_UrFFkq_1d6kC-4Xg7SLSB8ZZVDjTUqJC_ot8vaVfX4ZllJ2SHEYaPnXmbq8NZVotgoQ372eU8NCIa_7X7uGl8GS, PREF=ID=6505d5000db18c8c:FF=0:TM=1358526181:LM=1358526181:S=Nzb5yzBzXiKPLk48]}
THAT IS AWESOME
WebEngine simply uses the underlying registered cookie engine! But wait, does it really? Let's try adding a cookie prior to making the request...
cookie_store.add(new URL(url).toURI(), new HttpCookie("testCookieKey", "testCookieValue"));
Then I look at the request in Wireshark...
GET / HTTP/1.1
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/535.14 (KHTML, like Gecko) JavaFX/2.2 Safari/535.14
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Cache-Control: no-cache
Pragma: no-cache
Host: www.google.com
Connection: keep-alive
No cookie for me :(
What am I doing wrong?
I have managed to solve this issue with the help of Vasiliy Baranov from Oracle. Vasiliy wrote to me:
Try putting the cookie into java.net.CookieHandler.getDefault() after
the WebView is instantiated for the first time and before the call to
WebEngine.load, e.g. as follows:
WebView webView = new WebView();
URI uri = URI.create("http://mysite.com");
Map<String, List<String>> headers = new LinkedHashMap<String, List<String>>();
headers.put("Set-Cookie", Arrays.asList("name=value"));
java.net.CookieHandler.getDefault().put(uri, headers);
webView.getEngine().load("http://mysite.com");
This will place the cookie into the store permanently, it should be sent out on every subsequent request (presumably provided that the server doesn't unset it).
Vasiliy also explained that WebView will install its own implementation of CookieHandler, while retaining cookies put into the default one.
Lastly, he mentions something quite intriguing:
Do not waste your time trying to use java.net.CookieManager, and
java.net.CookieStore. They are likely to cause problems with many
sites because they implement the wrong standard.
I tried googling after this but it doesn't seem to be common knowledge. If anyone is able to provide more details I would be grateful. It seems weird, since it seems CookieStore and CookieManager are used by a lot of software out there.
Solution for java.net.CookieManager
Cookies serialization:
List<HttpCookie> httpCookies = cookieManager.getCookieStore().getCookies();
Gson gson = new GsonBuilder().create();
String jsonCookie = gson.toJson(httpCookies);
Cookies deserialization:
Gson gson = new GsonBuilder().create();
List<HttpCookie> httpCookies = new ArrayList<>();
Type type = new TypeToken<List<HttpCookie>>() {}.getType();
httpCookies = gson.fromJson(jsonCookie, type); // convert the JSON string back into a list
for (HttpCookie cookie : httpCookies) {
cookieManager.getCookieStore().add(URI.create(cookie.getDomain()), cookie);
}
In my desktop application I added access to various internet resources using boost::asio. All I do is send HTTP requests (e.g. to map tile servers) and read the results.
My code is based on the asio sync_client sample.
Now I get reports from customers who are unable to use these functions because they are behind a proxy in their company. In a web browser they can enter the address of their proxy and everything is fine, but our application is unable to download data.
How can I add such support to my application?
I found the answer myself. It's quite simple:
http://www.jmarshall.com/easy/http/#proxies
gives a brief and clear description of how HTTP proxies work.
All I had to do was add the following code to the asio sync_client sample:
std::string myProxyServer = ...;
int myProxyPort = ...;

void doDownLoad(const std::string &in_server, const std::string &in_path, std::ostream &outstream)
{
    std::string server = in_server;
    std::string path = in_path;
    std::string service_port = "http";

    if (!myProxyServer.empty())
    {
        // When talking to a proxy, request the absolute URI and connect
        // to the proxy host/port instead of the origin server.
        path = "http://" + in_server + in_path;
        server = myProxyServer;
        if (myProxyPort != 0)
            service_port = std::to_string(myProxyPort);
    }

    tcp::resolver resolver(io_service);
    tcp::resolver::query query(server, service_port);
    ...
It seems that sample is merely a showcase of what Boost.Asio can be used for and is likely not intended to be used as-is. You should probably use a complete library that handles not only HTTP proxies, but also HTTP redirects, compression, and so on.
HTTP is a complex thing: if you don't, chances are high that you will soon hear from another customer with another problem.
I found cpp-netlib, which looks promising and is based on Boost.Asio; I'm not sure it handles proxies, though.
There is also libcurl, but I don't know if it can easily be integrated with Boost.Asio.
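To illustrate the complete-library route, here is a minimal sketch of fetching a tile through an HTTP proxy with libcurl; the tile URL and proxy address are hypothetical placeholders. By default libcurl also honours the http_proxy / https_proxy environment variables, and it can follow redirects and negotiate compression when asked to.
#include <curl/curl.h>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    // Hypothetical tile URL and proxy address, for illustration only.
    curl_easy_setopt(curl, CURLOPT_URL, "http://tile.example.com/10/511/340.png");
    curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.mycompany.example");
    curl_easy_setopt(curl, CURLOPT_PROXYPORT, 8080L);
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);   // follow HTTP redirects
    curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "");  // accept any compression libcurl supports

    // Without a write callback, the response body goes to stdout.
    CURLcode rc = curl_easy_perform(curl);

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}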