We have a legacy server that receives requests in two ways:
HTTP REST requests via Jetty
Raw TCP sockets
We have a new requirement to take a raw request delivered via TCP and delegate it to Jetty internally, as if it had arrived as a regular HTTP request.
Is that possible?
If not, I guess the naive way would be to create an HTTP client and POST the data to ourselves as soon as it arrives on the TCP socket.
Related
I have IoT devices which are RFID readers. These devices send streaming data as HTTP POST messages. I use Django to receive data from these devices. I can either write a view that handles the incoming POST requests with Django REST Framework, or I can listen with a TCP server socket. Which of these two approaches is more suitable for retrieving the data? When I use a TCP server socket, Django REST Framework does not work for the HTTP requests; only the TCP socket works.
I am deploying a web service developed with gSOAP, using mod_gsoap. I wanted to set the SOAP_IO_KEEPALIVE and SOAP_IO_CHUNK modes on the soap context object so that it accepts chunked requests. How do I achieve this?
Or is there any other way to accept chunked requests? Right now the server responds as soon as it receives the first chunk, without waiting for the rest.
The documentation says:
Warning
Do not use any of the SOAP_IO flags to initialize or set the
context, such as SOAP_IO_KEEPALIVE and SOAP_IO_CHUNK.
The Apache server controls the connection settings and HTTP payload parameters to send and receive HTTP requests. Data is received with ap_get_client_block, which de-chunks the content when chunked.
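For illustration only, here is a rough sketch of the kind of body-reading loop an Apache handler performs (this is not mod_gsoap's actual source, and the handler name is made up): ap_get_client_block hands back already de-chunked bytes, which is why the module does not need SOAP_IO_CHUNK on the soap context.

#include "httpd.h"
#include "http_protocol.h"

/* Sketch of an Apache handler reading a request body.
   ap_get_client_block returns plain de-chunked data even when the
   client sent Transfer-Encoding: chunked. */
static int read_body_example(request_rec *r)
{
    int rc = ap_setup_client_block(r, REQUEST_CHUNKED_DECHUNK);
    if (rc != OK)
        return rc;

    if (ap_should_client_block(r)) {
        char buf[8192];
        long n;
        /* Loop until the whole body (all chunks) has been consumed */
        while ((n = ap_get_client_block(r, buf, sizeof(buf))) > 0) {
            /* hand n bytes in buf to the SOAP engine here */
        }
        if (n < 0)
            return HTTP_BAD_REQUEST;
    }
    return OK;
}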
I am working on a client-server application. The client side uses the Windows networking APIs to establish connections with the server. I send many HTTP requests, which can share a persistent connection. However, one particular HTTP request has to be sent over its own, separate TCP stream. How can I achieve this? Currently that request reuses the already established TCP stream, which is causing issues. I have control over the client code, so is there a header I can include to make sure this HTTP request does not share the connection?
// Limit how many connections WinHTTP may open/pool to one server for this handle
DWORD maxConnections = 1;  // illustrative value
WinHttpSetOption(handle, WINHTTP_OPTION_MAX_CONNS_PER_SERVER, &maxConnections,
                 sizeof(maxConnections));
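For context, a minimal sketch of where that call can sit (hypothetical host and path, error handling omitted): the option is applied to a dedicated session handle, and because WinHTTP pools connections per session, a request sent through its own session will not reuse a connection opened by the application's other sessions.

#include <windows.h>
#include <winhttp.h>
#pragma comment(lib, "winhttp.lib")

void SendOnDedicatedSession()
{
    // A session of its own: WinHTTP connection reuse is scoped to the session handle
    HINTERNET hSession = WinHttpOpen(L"MyApp", WINHTTP_ACCESS_TYPE_DEFAULT_PROXY,
                                     WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, 0);

    DWORD maxConnections = 1; // illustrative value
    WinHttpSetOption(hSession, WINHTTP_OPTION_MAX_CONNS_PER_SERVER,
                     &maxConnections, sizeof(maxConnections));

    HINTERNET hConnect = WinHttpConnect(hSession, L"server.example.com",
                                        INTERNET_DEFAULT_HTTP_PORT, 0);
    HINTERNET hRequest = WinHttpOpenRequest(hConnect, L"GET", L"/status", NULL,
                                            WINHTTP_NO_REFERER,
                                            WINHTTP_DEFAULT_ACCEPT_TYPES, 0);
    WinHttpSendRequest(hRequest, WINHTTP_NO_ADDITIONAL_HEADERS, 0,
                       WINHTTP_NO_REQUEST_DATA, 0, 0, 0);
    WinHttpReceiveResponse(hRequest, NULL);
    // ... read the response with WinHttpQueryDataAvailable / WinHttpReadData ...

    WinHttpCloseHandle(hRequest);
    WinHttpCloseHandle(hConnect);
    WinHttpCloseHandle(hSession);
}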
I want to connect through a proxy server that only allows HTTP connections, in order to talk to the target server over HTTPS.
The proxy server documentation states that the only way to do that is by means of the HTTP CONNECT verb (they are planning to add direct HTTPS connections to the proxy server itself, but for the moment only HTTP connections are allowed).
In my C++ program I successfully connected to and worked with the target server using ssl_streams for a couple of months, using boost::asio without boost::beast. I now want to go through a proxy and use boost::beast to make things easier; so I know how to work with boost::asio, but I'm a boost::beast newbie (and I don't fully understand how SSL works either).
The thing is that, in my understanding, when you use an ssl_stream you encrypt the whole communication; however, what I need now is to insert the encrypted message within the CONNECT HTTP body, and I don't know how to do that.
I've read that this has something to do with the lowest_layer/next_layer thing, but I'm not sure.
Could anybody provide an example of a full read/write connection through a proxy server, or at least further clarification?
Declare a variable for the connection (ioc is the io_context, ctx an ssl::context); its underlying TCP socket, stream.next_layer(), must first be connected to the proxy's host and port
boost::asio::ssl::stream<boost::asio::ip::tcp::socket> stream{ioc, ctx};
Build a CONNECT HTTP request message (req) using Beast
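For example (the target host:port is a placeholder), the CONNECT request is built like any other Beast message, with the server's host:port as the request target:
boost::beast::http::request<boost::beast::http::empty_body> req{boost::beast::http::verb::connect, "server.example.com:443", 11};
req.set(boost::beast::http::field::host, "server.example.com:443");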
Send the request to the proxy in plain text (note next_layer())
boost::beast::http::write(stream.next_layer(), req);
Read the HTTP response from the proxy
If the response has OK status, the tunnel is established
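A sketch of that read with synchronous Beast calls; note that a successful CONNECT reply carries no body, so the parser is told to stop after the header:
boost::beast::flat_buffer buffer;
boost::beast::http::response_parser<boost::beast::http::empty_body> parser;
parser.skip(true); // a CONNECT response has no body, stop at the end of the header
boost::beast::http::read(stream.next_layer(), buffer, parser);
bool tunnel_ok = parser.get().result() == boost::beast::http::status::ok;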
Now perform the SSL handshake:
stream.handshake(boost::asio::ssl::stream_base::client);
At this point you can write HTTP requests to stream and read HTTP responses from stream using Beast as normal (do not use next_layer() again).
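For instance, once the handshake has completed, a request for the target server (path and Host value are placeholders) goes through the encrypted tunnel like this:
boost::beast::http::request<boost::beast::http::string_body> get_req{boost::beast::http::verb::get, "/", 11};
get_req.set(boost::beast::http::field::host, "server.example.com");
boost::beast::http::write(stream, get_req); // encrypted end-to-end; the proxy only relays bytes

boost::beast::flat_buffer res_buffer;
boost::beast::http::response<boost::beast::http::string_body> res;
boost::beast::http::read(stream, res_buffer, res);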
We have a client/server (C/S) program, and users use the client to connect to our server. But some corporate users access the Internet via an HTTP proxy server (not a SOCKS4 or SOCKS5 proxy). In this case, we need to provide a feature to configure a proxy server in the client (just like some other software does). If we do so, it seems we would have to wrap our original data in the HTTP protocol. So I want to know:
Is this method OK, or is there a better way to solve the problem?
If we do so, can our server still send data to the client on its own initiative?
Do you know how other released software with a proxy-server setting deals with this problem?
That is not how HTTP proxies work. You do not have to re-package your existing data as HTTP. All you need to do is:
connect to the HTTP proxy port, and send it an HTTP CONNECT request specifying the host/IP and port to connect to, eg:
CONNECT hostname:port HTTP/1.0
User-agent: MyApp
If the proxy requires authentication, you can also provide a Proxy-authorization header containing the encoded credentials as needed, eg:
CONNECT hostname:port HTTP/1.0
User-agent: MyApp
Proxy-authorization: basic dGVzdDp0ZXN0
if the proxy accepts the request and is successful in connecting to the requested host, it will send back an HTTP 200 reply, eg:
HTTP/1.0 200 Connection established
Proxy-agent: ProxyApp/1.1
you can now send and receive your data as you were already doing before, and the proxy will pass the data as-is between the client and the host in both directions. You do not have to change any code logic other than to establish the proxy connection.
See Tunneling TCP based protocols through Web proxy servers for more details.
This process is similar to the way other proxy protocols work, like SOCKS. The client connects to the proxy, requests a connection to the server host, and then the client and server pass data back and forth as if the proxy were not present.
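To make the sequence concrete, here is a minimal synchronous sketch using boost::asio (the proxy and server names, ports and payload are placeholders, and error handling is minimal); it performs exactly the steps described above and then carries on with the application's own data:

#include <boost/asio.hpp>
#include <iostream>
#include <string>

int main() {
    namespace asio = boost::asio;
    using asio::ip::tcp;

    asio::io_context ioc;
    tcp::resolver resolver{ioc};
    tcp::socket sock{ioc};

    // 1. Connect to the proxy itself
    asio::connect(sock, resolver.resolve("proxy.example.com", "3128"));

    // 2. Ask the proxy to open a tunnel to the real server
    std::string connect_req =
        "CONNECT server.example.com:9000 HTTP/1.0\r\n"
        "User-agent: MyApp\r\n"
        "\r\n";
    asio::write(sock, asio::buffer(connect_req));

    // 3. Read the proxy's reply up to the blank line that ends the headers
    asio::streambuf reply;
    asio::read_until(sock, reply, "\r\n\r\n");
    std::istream is{&reply};
    std::string status_line;
    std::getline(is, status_line);
    if (status_line.find(" 200 ") == std::string::npos) {
        std::cerr << "proxy refused the tunnel: " << status_line << "\n";
        return 1;
    }

    // 4. The tunnel is up: send and receive the original protocol unchanged
    const std::string payload = "hello from client";
    asio::write(sock, asio::buffer(payload));
    return 0;
}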