Request (GET) or (POST):
http://localhost:8080/images?name=iVBORw0KGgoAAAANSUhEUgAAAAUA%20AAAFC......
Response:
Status Code: 414 Request-URI Too Long
Connection: close
Content-Length: 0
How to increase the request size?
You have a Request-URI that is over 8 KB in size! Eeesh!
Request-URI limits exist because of various vulnerabilities and bugs found in browsers, proxies, and networking hardware.
While it is possible to increase the Request URI limit checks in Jetty, the values chosen for Jetty represent the current safe maximums in use by various http clients and intermediaries on the public internet.
WARNING: YOU DO NOT WANT TO DO THIS
This is inappropriate for:
A WebServer accessible from the Internet.
A WebServer accessed by browsers like Chrome, Firefox, Safari, MSIE, or Opera.
A WebServer accessed by a mobile device like Android, iOS, or Microsoft mobile.
A WebServer that has a proxy in front of it.
A client that uses a proxy to access the WebServer.
This is only useful for transactions limited between custom HTTP clients directly talking to a Jetty server.
Instructions for Jetty 9.2.6.v20141205
If you don't have a Jetty Base ${jetty.base} directory yet, create one, and initialize it.
[user]$ mkdir mybase
[user]$ cd mybase
[mybase]$ java -jar /path/to/jetty-distribution-9.2.6.v20141205/start.jar \
--add-to-start=http,deploy,webapp
Edit ${jetty.base}/start.ini and change (or add) the following property, raising it to your desired upper limit (8192 bytes is the default):
jetty.request.header.size=8192
And no, there is no way to disable this limit check.
With each increase you open yourself up to greater and greater issues.
To start with, some browsers (and eventually all browsers) will simply refuse to send the request, so Jetty never even receives it.
Meanwhile, the ability of many proxy servers to handle your request starts to fail, resulting in terminated and failed connections or requests, and sometimes even truncated requests reaching Jetty.
Each increase also exposes you to various vulnerabilities surrounding unchecked limits in headers, giving various groups the ability to execute CPU- and memory-based DoS attacks that require very little network traffic to perform.
The Correct Way to Fix This:
You really should switch to POST (or PUT) based request data, and stop sending that amount of data in the request line and headers of the HTTP protocol.
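As a sketch of the size math involved (Python; the /images endpoint is the hypothetical one from the question): base64-encoded image data blows past Jetty's default 8 KB request line almost immediately, while the same bytes in a POST body face no such limit.

```python
import base64
import os
from urllib.parse import quote

JETTY_DEFAULT_LIMIT = 8192  # bytes; Jetty's default requestHeaderSize

def fits_in_request_line(path: str, query: str) -> bool:
    """Rough check: would this URI stay under Jetty's default header limit?"""
    return len(path) + len(query) + 1 < JETTY_DEFAULT_LIMIT

# ~20 KB of image bytes, the kind of payload from the question
payload = base64.b64encode(os.urandom(20_000)).decode()

# As a query string it exceeds the limit -> 414 Request-URI Too Long
assert not fits_in_request_line("/images", "name=" + quote(payload))

# The same data as a POST body has no such limit:
#
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:8080/images",   # hypothetical endpoint
#       data=payload.encode(),            # the data rides in the body
#       method="POST",
#   )
#   urllib.request.urlopen(req)
```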
A possibly easier way is to set the requestHeaderSize in your yaml config file. Here is an extract from ours:
server:
  applicationConnectors:
    - type: http
      maxRequestHeaderSize: 100KiB
Other possible settings are shown here
Related
We have a streaming endpoint where data streams through our api.domain.com service to our backend.domain.com service; as chunks are received in backend.domain.com, we write those chunks to the database. This way we can stream ndjson requests into our servers, and IT IS FAST, VERY FAST.
We were very disappointed to find out that the Cloud Run firewall, at least for HTTP/1.1 (via curl), does NOT support streaming. curl speaks HTTP/2 to the Google Cloud Run firewall, but Google by default hits our servers with HTTP/1.1 (for some reason, though I saw an option to start in HTTP/2 mode that we have not tried).
By "they don't support streaming" I mean that Google does not send our servers a request UNTIL it has received the whole request (i.e. not just the headers; it needs to receive the entire body). This makes things very slow compared to streaming straight through firewall 1, Cloud Run service 1, firewall 2, Cloud Run service 2, and into the database.
I am wondering if Google's Cloud Run firewall by chance supports HTTP/2 streaming and actually forwards the request headers instead of waiting for the entire body.
I realize Google has body size limits, AND I realize we respond to clients with 200 OK before the entire body is received (i.e. we stream back while a request is being streamed in), so I am totally OK with Google killing the connection if the size limit is exceeded.
So my second question in this post is: if they do support streaming, what will they do when the size limit is exceeded, given that I will already have responded with 200 OK at that point?
In this post, my definition of streaming is "true streaming": you can stream a request into a system, and that system can forward it to the next system, reading and forwarding over and over rather than waiting for the whole request. The Google Cloud Run firewall is NOT streaming by my definition, since it does not pass through the chunks it receives. Our servers send data as they receive it, so even with many hops there is no impact, thanks to the webpieces webserver.
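For reference, the "true streaming" described above rides on HTTP/1.1 chunked transfer encoding: each chunk is self-delimiting, so a hop can forward it the moment it arrives. A minimal sketch of the wire framing (Python; the ndjson records are made up for illustration):

```python
def chunk_frame(data: bytes) -> bytes:
    """Frame one piece of data as an HTTP/1.1 chunk:
    hex length, CRLF, payload, CRLF (RFC 7230 section 4.1)."""
    return f"{len(data):x}\r\n".encode() + data + b"\r\n"

def last_chunk() -> bytes:
    """The zero-length chunk that terminates the body."""
    return b"0\r\n\r\n"

# Each ndjson record can be framed and forwarded immediately;
# nothing in the encoding requires buffering the whole body first.
body = chunk_frame(b'{"id":1}\n') + chunk_frame(b'{"id":2}\n') + last_chunk()
```

A proxy that truly streams relays each frame as it arrives; one that buffers (as described above for Cloud Run) holds everything until the terminating chunk before forwarding anything.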
Unfortunately, Cloud Run doesn't support HTTP/2 end-to-end to the serving instance.
Server-side streaming is in ALPHA. Not sure if it helps solve your problem. If it does, please fill out the following form to opt in, thanks!
https://docs.google.com/forms/d/e/1FAIpQLSfjwvwFYFFd2yqnV3m0zCe7ua_d6eWiB3WSvIVk50W0O9_mvQ/viewform
I am experimenting with the following setup.
Clone/copy (but not redirect) all incoming HTTP requests from port 80 to another port, say 8080, on the same machine. I have a simple NGINX + Lua based WAF which is listening on 8080. Essentially I am running two instances of webservers here: one serving real requests, the other working on cloned traffic for detection purposes. I don't care about being able to block the malicious requests, so I don't care about being inline.
I want to use the WAF only for detection, i.e. it should analyze every incoming request, raise an alert, and then drop the request. This won't hamper anything from the user's point of view, since port 80 is serving the real requests.
How can I clone traffic this way and just discard it after analysis is done? Is this feasible? If yes, please suggest tools that can clone traffic with minimal performance hit.
Have a look: https://github.com/buger/gor
The example instructions are straightforward, and you can add extra logging or certain forwards as well.
In current Nginx versions there is ngx_http_mirror_module, which mirrors requests to another endpoint and ignores the responses. See also this answer.
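A minimal mirror configuration along those lines might look like this (the backend port is an assumption; 8080 is the WAF port from the question, and nginx discards the mirror subrequest's response automatically):

```nginx
server {
    listen 80;

    location / {
        mirror /waf_mirror;              # copy each request to the mirror location
        proxy_pass http://127.0.0.1:9000;  # your real application (assumed port)
    }

    location = /waf_mirror {
        internal;
        # replay the original request against the WAF instance on 8080;
        # whatever the WAF responds with is ignored
        proxy_pass http://127.0.0.1:8080$request_uri;
    }
}
```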
I have an iPhone App that gets the data by SOAP requests.
The SOAP calls are done by sudzc.com library.
I have to make SOAP requests to two servers.
Server A: my own server, where I retrieve some information; the SOAP response is written by myself
Server B: a third-party server that gives me some necessary information
iOS 6
The app is working 100% correct.
iOS 7
Server A: working perfectly
Server B: SOAP requests randomly fail. I sometimes get the following error message:
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Header/>
  <SOAP-ENV:Body>
    <SOAP-ENV:Fault>
      <faultcode>SOAP-ENV:Server</faultcode>
      <faultstring xml:lang="en">Could not access envelope: Unable to create envelope from given source: ; nested exception is com.sun.xml.internal.messaging.saaj.SOAPExceptionImpl: Unable to create envelope from given source: : Unable to create envelope from given source: : org.xml.sax.SAXParseException: The markup in the document preceding the root element must be well-formed.: The markup in the document preceding the root element must be well-formed.</faultstring>
      <detail/>
    </SOAP-ENV:Fault>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
Anyone has an idea, why this only happens on iOS7 and how I could get rid of it?
UPDATE:
Might it be related to the fact that one server is running on https and the other runs on http?
Server A: http://www.xxx.xy
Server B: https://www.xxx.xy:443
I wrote a support request to the iOS team; here is what they replied. In fact it does have something to do with the https. Just in case someone runs into the same error, here might be the reason why:
I'm responding to your question as to why your SOAP requests are failing on iOS 7, but only when targeting one of your two servers. You wrote:
May it be related to the fact, that one server is running on https and
the other runs on http?
It turns out that your theory is correct. The problem here is that iOS 7 includes the recommended countermeasure for the BEAST attack.
http://en.wikipedia.org/wiki/Transport_Layer_Security#BEAST_attack
http://www.educatedguesswork.org/2011/11/rizzoduong_beast_countermeasur.html
https://www.imperialviolet.org/2012/01/15/beastfollowup.html
iOS applies this countermeasure when it talks to a TLS 1.0 or earlier server (TLS 1.1 and later include their own fix for this attack) that negotiates the use of a block cypher (stream cyphers are not vulnerable to this attack).
Looking at a packet trace of the test app you sent me, I can see that it opens a TLS 1.2 connection to www.xxxx.xy:443. That server downgrades the connection to TLS 1.0 and then negotiates to use the AES-128 block cypher (SSL_RSA_WITH_AES_128_CBC_SHA). After that I can see the unmistakable signs of this countermeasure kicking in.
I confirmed my analysis by running your test app on the simulator (which accurately reproduces the problem) and then, using the debugger, setting the internal state of Secure Transport (the subsystem that implements TLS on iOS) to disable this countermeasure entirely. On doing that I found that the first tab in your app works fine.
A well-known side effect of this countermeasure is that it causes problems for poorly written HTTPS servers. That's because it breaks up the HTTP request (the one running over the TLS connection) into chunks, and the server must be coded to correctly receive these chunks and join them into a single HTTP request. Some servers don't do that properly, which results in a variety of interesting failures.
The best way to solve this problem is to fix the server. The server should be able to cope with receiving the HTTP message in chunks, detecting HTTP message boundaries in the manner prescribed by RFC 2616.
http://www.ietf.org/rfc/rfc2616.txt
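To make the "cope with chunks" point concrete, here is a toy sketch (Python, not the actual server code) of a receive loop that keeps appending TCP fragments until the Content-Length boundary from RFC 2616 is satisfied, which is exactly what absorbs the countermeasure's 1-byte record split:

```python
def assemble_request(fragments):
    """Join TCP fragments into one HTTP request.

    Returns (head, body) once the Content-Length boundary is reached,
    or None if more data is still needed -- a correct server keeps
    reading in that case instead of failing."""
    buf = b"".join(fragments)
    sep = buf.find(b"\r\n\r\n")
    if sep == -1:
        return None                      # headers not complete yet
    head, rest = buf[:sep], buf[sep + 4:]
    length = 0
    for line in head.split(b"\r\n")[1:]:
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"content-length":
            length = int(value.strip().decode())
    if len(rest) < length:
        return None                      # body not complete yet
    return head, rest[:length]

# The BEAST countermeasure sends a 1-byte record first; the server
# must simply keep reading until the message is complete.
request = b"POST /api HTTP/1.1\r\nContent-Length: 5\r\n\r\nhello"
assert assemble_request([request[:1]]) is None
assert assemble_request([request[:1], request[1:]]) == (
    b"POST /api HTTP/1.1\r\nContent-Length: 5", b"hello")
```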
If that fix is too hard to implement in the short term, you can work around the problem by simply upgrading your server to support TLS 1.2. This is a good idea anyway.
Another workaround, one that's a less good idea, is to tweak the server configuration to negotiate the use of a stream cypher.
If you don't control the server, I strongly encourage you to lobby the server operators for a server-side fix for this problem. iOS 7 is not the only client implementing this countermeasure; you'll find it in recent versions of Chrome, Firefox, and so on.
If a server-side fix is just not possible, your options on the client side are less than ideal:
o You could replace HTTPS with HTTP. Obviously this is not a good thing, and it also requires that the server support HTTP. Also, HTTP comes with its own array of un-fun problems (various middleboxes, especially those run by cellular carriers, like to monkey with HTTP messages).
o At the lowest level, you can disable this countermeasure for a given Secure Transport context by way of the kSSLSessionOptionSendOneByteRecord flag (see ). This flag is not directly available to higher-level software, although it is possible to use it at the CFSocketStream layer (because that API gives you a way to get at the Secure Transport context it's using).
IMPORTANT: The high-level APIs, NSURLSession and NSURLConnection, don't give you access to the Secure Transport context, nor do they provide any control over this aspect of TLS. Thus, there's no way to disable this countermeasure and continue working with those nice, high-level APIs.
o You could implement your own HTTP layer on top of our TLS infrastructure (ideally CFSocketStream). Alas, this is a whole world of pain: HTTP is a lot more complex than you might think.
I'm sorry I don't have better news.
I'm not real hip on exactly what role(s) today's proxy servers can play, and I'm learning, so go easy on me :-) I have a client/server system written using a homegrown protocol, and I need to enhance the client side to negotiate its way out of a proxy environment.
I have an existing client and server system written in C and C++ for the speed and a small amount of MFC in the client to handle the user interface. I have written both the server and client side of the system on Windows (the people I work for are mainly web developers using Windows everything - not a choice) sticking to Berkeley Sockets as it were via wsock32 for efficiency. The clients connect to the server through a nonstandard port (even though using port 80 is an option to get out of some environments but the protocol that goes over it isn't HTTP). The TCP connection(s) stay open for the duration of the clients participation in real time conferences.
Our customer base is expanding to all kinds of networked environments. I have been able to solve a lot of problems by adding the ability to connect securely over port 443 using secure sockets, which allows the protocol to pass through a lot of environments since the internal packets can't be sniffed. But more and more of our customers are behind a proxy server environment, and my direct connections don't make it through. My old-school understanding of proxy servers is that they act as a proxy for external HTML content over HTTP, possibly locally caching popular material for faster local access, and also allowing IT staff to blacklist certain destination sites. Customers are complaining that my software doesn't recognize and easily navigate its way through their proxy environments, but I'm finding it difficult to decide what my "best fit" solution should be. My software doesn't tear down the connection after each client request, and on top of that packets can come from either side at any time; basically your typical custom client/server system for a specific niche.
My first reaction is "why can't they just add my server's addresses to their white list" but if there is a programmatic way I can get through without requiring their IT staff to help it is politically better and arguably a better solution anyway. Plus maybe I'm still not understanding the role and purpose of what proxy servers and environments have grown to be these days.
My first attempt at a solution was to use WinInet with its various proxy capabilities to establish a connection over port 80 to my non-standard protocol server (which knows enough to recognize a simple HTTP-looking GET request and answer it with a simple HTTP response page, to get around some environments that employ initial packet sniffing / DPI). I retrieved the actual SOCKET handle behind WinInet's HINTERNET request object and had hoped to use that in place of my software's existing SOCKET connection, hopefully without changing much more on the client side. It initially seemed to be my solution, but on further inspection it appears the OS gets first chance at the received data on this socket: when I get notified of events via the standard select(...) call on the socket and query the size of the data available via ioctlsocket, the call succeeds but reports 0 bytes available, the reads don't work, and it goes downhill from there.
Can someone tell me of a client-side library (commercial is fine) that will let me get past these proxy server environments with as little user and IT staff help as possible? From what I read, the problem has grown past SOCKS, and I figure someone has to have solved it before me.
Thanks for reading my long-winded question,
Ripred
If your software can make an SSL connection on port 443, then you are 99% of the way there.
Typically HTTP proxies are set up to proxy SSL-on-443 (for the purposes of HTTPS). You just need to teach your software to use the HTTP proxy. Check the HTTP RFCs for the full details, but the Cliffs Notes version is:
Connect to the HTTP proxy on the proxy port;
Send to the proxy:
CONNECT your.real.server:443 HTTP/1.1\r\n
Host: your.real.server:443\r\n
User-Agent: YourSoftware/1.234\r\n
\r\n
Then parse the proxy response, which will start with an HTTP status code, followed by HTTP headers, followed by a blank line. You'll then be talking with your destination (if the status code indicated success, anyway), and can start talking SSL.
In many corporate environments you'll have to authenticate with the proxy - this is almost always HTTP Basic Authentication, which is pretty easy - again, see the RFCs.
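A sketch of those steps in Python (the proxy address, server name, and credentials below are placeholders, not values from the question), including the optional Basic auth header:

```python
import base64
import socket
import ssl

def build_connect(host, port, user_agent, credentials=None):
    """Build the CONNECT request sent to an HTTP proxy; `credentials`
    is an optional (user, password) tuple for Basic authentication."""
    lines = [
        f"CONNECT {host}:{port} HTTP/1.1",
        f"Host: {host}:{port}",
        f"User-Agent: {user_agent}",
    ]
    if credentials:
        token = base64.b64encode(
            f"{credentials[0]}:{credentials[1]}".encode()).decode()
        lines.append(f"Proxy-Authorization: Basic {token}")
    return ("\r\n".join(lines) + "\r\n\r\n").encode()

# Usage sketch (proxy host/port are assumptions):
#   sock = socket.create_connection(("proxy.example.com", 3128))
#   sock.sendall(build_connect("your.real.server", 443, "YourSoftware/1.234"))
#   ... read until b"\r\n\r\n"; check the status line says 200 ...
#   tls = ssl.create_default_context().wrap_socket(
#       sock, server_hostname="your.real.server")
#   # from here on, speak your own protocol over `tls`
```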
Say you are dealing with a Windows network that, for internet access, must pass through a firewall you have no control over. Said firewall apparently blocks the known time protocols (NTP, daytime, etc.), and you know from experience that those who control it will not allow any exceptions.
Is it possible to sync this "Windows" (could be linux) computer via a web service call which grabs the time from a server out on the internet?
Is there another reliable method for updating the time on the server, like pulling it from a website and passing it to the NTP client?
HTTP Time Protocol:
http://www.vervest.org/fiki/bin/view/HTP/WebHome
It takes the date from the HTTP server itself, not from a website served by it
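The same trick is easy to do by hand: every well-behaved HTTP response carries a Date header, and ports 80/443 are rarely blocked. A sketch in Python (the host in the usage comment is a placeholder):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def parse_http_date(value):
    """Parse an HTTP Date header (RFC 1123 format, e.g.
    'Sun, 06 Nov 1994 08:49:37 GMT') into an aware datetime."""
    return parsedate_to_datetime(value)

# Usage sketch (needs network; host is an assumption):
#   import http.client
#   conn = http.client.HTTPSConnection("www.example.com")
#   conn.request("HEAD", "/")
#   server_time = parse_http_date(conn.getresponse().getheader("Date"))
#   offset = server_time - datetime.now(timezone.utc)
#   # apply `offset` to the local clock, as htp does
```

Note the Date header only has one-second resolution, so this is a rough sync, not an NTP replacement.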