ModSecurity response/request body size?

How do I get the size of the response/request body in Mod Security?
I keep getting this error, for example:
[Mon Sep 17 23:34:38 2012] [error] [client 192.168.1.1] ModSecurity: Output filter: Response body too large (over limit of 1000, total not specified). [hostname "example.com"] [uri "/index.php"] [unique_id "asdf"]
It's not telling me the total; how can I figure the total out?

Take a look at the SecResponseBodyLimit docs:
SecResponseBodyLimit
Description: Configures the maximum response body size that will be accepted for buffering.
Syntax: SecResponseBodyLimit NUMBER_IN_BYTES
Example Usage: SecResponseBodyLimit 524228
Processing Phase: N/A
Scope: Any
Dependencies/Notes: Anything over this limit will be rejected with status code 500 Internal Server Error. This setting will not affect the responses with MIME types that are not marked for buffering. There is a hard limit of 1 GB.
By default this limit is configured to 512 KB:
# Buffer response bodies of up to 512 KB in length
SecResponseBodyLimit 524288
For some reason, you have it set to "1000" and /index.php's output is larger than 1000 bytes.
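If you don't actually need such a small buffer, one fix is to raise the limit back to the default; another is to have ModSecurity process the part of the response that fits instead of rejecting it. A minimal sketch (SecResponseBodyLimitAction is a ModSecurity 2.x directive; check the reference manual for your version):
# Restore the 512 KB default buffering limit
SecResponseBodyLimit 524288
# Or keep a small limit but scan what fits rather than returning an error
SecResponseBodyLimitAction ProcessPartial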

Related

Why does maxRequestsPerConnection in Istio affect HTTP/1.1 requests?

I'm just learning about service meshes using Istio, and I found some strange behavior.
To understand the maxRequestsPerConnection setting of Istio's DestinationRule CRD, I wrote the manifest below and applied it.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
Then I sent requests using fortio. The result is below:
yunoMacBook-Air:labo8 yu$ kubectl exec "$FORTIO_POD" -c fortio -- /usr/bin/fortio load -c 5 -qps 0 -n 1000 -loglevel Error http://httpbin:8000/get
07:12:01 I logger.go:127> Log level is now 4 Error (was 2 Info)
Fortio 1.11.3 running at 0 queries per second, 2->2 procs, for 1000 calls: http://httpbin:8000/get
Aggregated Function Time : count 1000 avg 0.0036879818 +/- 0.004588 min 0.000379697 max 0.034176044 sum 3.68798183
# target 50% 0.00234783
# target 75% 0.0032551
# target 90% 0.008
# target 99% 0.025
# target 99.9% 0.032784
Sockets used: 876 (for perfect keepalive, would be 5)
Jitter: false
Code 200 : 126 (12.6 %)
Code 503 : 874 (87.4 %)
All done 1000 calls (plus 0 warmup) 3.688 ms avg, 1170.1 qps
yunoMacBook-Air:labo8 yu$
After that, I changed the maxRequestsPerConnection value to 10:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  (...)
        maxRequestsPerConnection: 10
and I sent requests again with the same settings.
yunoMacBook-Air:labo8 yu$ kubectl exec "$FORTIO_POD" -c fortio -- /usr/bin/fortio load -c 5 -qps 0 -n 1000 -loglevel Error http://httpbin:8000/get
07:11:07 I logger.go:127> Log level is now 4 Error (was 2 Info)
Fortio 1.11.3 running at 0 queries per second, 2->2 procs, for 1000 calls: http://httpbin:8000/get
Aggregated Function Time : count 1000 avg 0.0039736575 +/- 0.004068 min 0.000404827 max 0.030141552 sum 3.97365754
# target 50% 0.00231923
# target 75% 0.00475
# target 90% 0.0104667
# target 99% 0.0192
# target 99.9% 0.025
Sockets used: 723 (for perfect keepalive, would be 5)
Jitter: false
Code 200 : 281 (28.1 %)
Code 503 : 719 (71.9 %)
All done 1000 calls (plus 0 warmup) 3.974 ms avg, 1098.3 qps
yunoMacBook-Air:labo8 yu$
The 200 rate increased, and I cannot understand why that happened.
In my understanding, fortio uses HTTP/1.1, and with HTTP/1.1 only one HTTP request is carried per TCP connection, so I expected to get the same results.
Could you tell me why this happened?
First things first: HTTP/1.1 does allow multiple requests per connection via the Keep-Alive mechanism; persistent connections are the default behavior (RFC 2616, Section 8.1).
The documentation is a bit unclear.
maxRequestsPerConnection description states:
Maximum number of requests per connection to a backend. Setting this parameter to 1 disables keep alive. Default 0, meaning “unlimited”, up to 2^29.
Setting maxRequestsPerConnection to 1 disables Keep-Alive. Setting it to any value greater than 1 switches Keep-Alive back on. With Keep-Alive enabled, connections are reused, so the pool hits its maxConnections and http1MaxPendingRequests limits less often and fewer requests are rejected with 503; that is why your 200 rate went up.
Setting this field to a proper value (not too high, not too low) is the hard part of configuring Istio, and it depends on your application's needs and traffic.
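For example, a middle-ground setting might look like this (the value 100 is purely illustrative, not a recommendation):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      http:
        # Keep-Alive stays enabled; each connection is recycled after 100 requests
        maxRequestsPerConnection: 100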

ExportDicomData request of Google Cloud Healthcare API from the GitHub tutorials never finishes

I'm trying the AutoML Vision ML codelab from the Cloud Healthcare API GitHub tutorials.
https://github.com/GoogleCloudPlatform/healthcare/blob/master/imaging/ml_codelab/breast_density_auto_ml.ipynb
I ran the Export DICOM data cell in the Convert DICOM to JPEG section; the request, as well as all the prerequisite cells, succeeded.
But waiting for operation completion times out, and the operation never finishes.
(The ExportDicomData request status on the Dataset page stays "Running" for over a day. I tried many times, but all the requests got stuck at "Running". A few times I started over from scratch, and the results were the same.)
What I have done so far:
1) Removed "output_config" since an INVALID_ARGUMENT error occurs with it.
https://github.com/GoogleCloudPlatform/healthcare/issues/133
2) Enabled the Cloud Resource Manager API, since it is needed.
This is the cell code:
# Path to export DICOM data.
dicom_store_url = os.path.join(HEALTHCARE_API_URL, 'projects', project_id,
                               'locations', location, 'datasets', dataset_id,
                               'dicomStores', dicom_store_id)
path = dicom_store_url + ":export"
# Headers (send request in JSON format).
headers = {'Content-Type': 'application/json'}
# Body (encoded in JSON format).
# output_config = {'output_config': {'gcs_destination': {'uri_prefix': jpeg_folder, 'mime_type': 'image/jpeg; transfer-syntax=1.2.840.10008.1.2.4.50'}}}
output_config = {'gcs_destination': {'uri_prefix': jpeg_folder, 'mime_type': 'image/jpeg; transfer-syntax=1.2.840.10008.1.2.4.50'}}
body = json.dumps(output_config)
resp, content = http.request(path, method='POST', headers=headers, body=body)
assert resp.status == 200, 'error exporting to JPEG, code: {0}, response: {1}'.format(resp.status, content)
print('Full response:\n{0}'.format(content))
# Record operation_name so we can poll for it later.
response = json.loads(content)
operation_name = response['name']
This is the result of waiting.
Waiting for operation completion...
Full response:
{
  "name": "projects/my-datalab-tutorials/locations/us-central1/datasets/sample-dataset/operations/18300485449992372225",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.healthcare.v1beta1.OperationMetadata",
    "apiMethodName": "google.cloud.healthcare.v1beta1.dicom.DicomService.ExportDicomData",
    "createTime": "2019-08-18T10:37:49.809136Z"
  }
}
AssertionError Traceback (most recent call last)
<ipython-input-18-1a57fd38ea96> in <module>()
21 timeout = time.time() + 10*60 # Wait up to 10 minutes.
22 path = os.path.join(HEALTHCARE_API_URL, operation_name)
---> 23 _ = wait_for_operation_completion(path, timeout)
<ipython-input-18-1a57fd38ea96> in wait_for_operation_completion(path, timeout)
15
16 print('Full response:\n{0}'.format(content))
---> 17 assert success, "operation did not complete successfully in time limit"
18 print('Success!')
19 return response
AssertionError: operation did not complete successfully in time limit
The API version is v1beta1.
I was wondering if somebody has any suggestions.
Thank you.
After trying several more times and leaving one request running overnight, it finally succeeded. I don't know why.
There was a recent update to the codelabs. The error message is due to the timeout in the codelab and not the actual operation. This has been addressed in the update. Please let me know if you are still running into any issues!
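For anyone who hits the same timeout, a minimal polling sketch is below. It assumes the authorized http client object and HEALTHCARE_API_URL from the codelab's earlier cells, and relies on the done field that long-running operation resources set on completion; the poll interval is arbitrary.
import json
import time

def wait_for_operation_completion(path, timeout, poll_interval=30):
    # 'http' is the authorized httplib2 client from the codelab's earlier cells.
    # 'timeout' is an absolute time, e.g. time.time() + 10*60.
    while time.time() < timeout:
        resp, content = http.request(path, method='GET')
        assert resp.status == 200, 'polling failed: {0}'.format(content)
        response = json.loads(content)
        if response.get('done'):
            return response
        time.sleep(poll_interval)
    raise RuntimeError('operation still running past the time limit')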

How to send a long URL with the GET method to the server

I want to send a long URL with GET, and I also need to put the parameters in the GET link,
like this (React):
let urlParam = {
  apply_user: this.state.apply_user,
  order_id: this.state.order_id,
  begin_date: this.state.begin_date,
  end_date: this.state.end_date,
  status: this.state.status,
  execute_mode: this.state.execute_mode,
  page: page,
  pageSize: this.state.pageSize,
  otype: 'all',
}
history.pushState(null, null, '?'+concatURLParams(urlParam))
Then I can get all the information from the URL, like this:
http://cmdb.server.com/page/machine/list/?page=1&ips=172.17.10.3%20172.17.10.4%20172.17.10.9%20172.17.10.10
But in fact the parameter is very long; there are lots of IPs that need to be sent.
I use uWSGI to run my Django project, and I set buffer-size to 65536 (the official docs say the max size is 64k), but I still got this:
invalid uwsgi request (current strsize: 21600). skip.
[pid: 15947|app: -1|req: -1/7] () {0 vars in 31 bytes} [Mon Jan 14 11:18:52 2019] => generated 0 bytes in 0 msecs ( 500) 0 headers in 0 bytes (0 switches on core 0)
This is my uwsgi.ini:
[uwsgi]
socket=127.0.0.1:9090
chdir=/home/ops/cmdb_futu/jumpserver
module=jumpserver.wsgi
master=true
buffer-size=65536
vacuum=true
processes=8
max-requests=2000
chmod-socket=664
vacuum=true
pidfile=uwsgi.pid
I also set nginx's large_client_header_buffers and client_header_buffer_size to 64k.
But I haven't been able to solve this for a long time. Does anyone know why, and can you help me?
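For reference, here is a minimal sketch of the nginx settings described above (the directive names and the 64k value are from the question; placing them in the http block is an assumption, since the actual config was not shown):
http {
    # Allow the request line and headers to grow up to 64k,
    # matching the uWSGI buffer-size above
    client_header_buffer_size 64k;
    large_client_header_buffers 4 64k;
}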

Is it possible to avoid the 60-second limit in urllib2.urlopen with GAE?

I'm requesting a file of around 14 MB from a slow server with urllib2.urlopen, and it takes more than 60 seconds to get the data, so I'm getting this error:
Deadline exceeded while waiting for HTTP response from URL:
http://bigfile.zip?type=CSV
Here is my code:
class CronChargeBT(webapp2.RequestHandler):
    def get(self):
        taskqueue.add(queue_name='optimized-queue', url='/cronChargeBTB')

class CronChargeBTB(webapp2.RequestHandler):
    def post(self):
        url = "http://bigfile.zip?type=CSV"
        url_request = urllib2.Request(url)
        url_request.add_header('Accept-encoding', 'gzip')
        urlfetch.set_default_fetch_deadline(300)
        response = urllib2.urlopen(url_request, timeout=300)
        buf = StringIO(response.read())
        f = gzip.GzipFile(fileobj=buf)
        # ...work with the data inside the file...
I created a cron task that calls CronChargeBT. Here is the cron.yaml:
- description: load BlueTomato
  url: /cronChargeBT
  target: charge
  schedule: every wed,sun 01:00
It creates a new task and inserts it into a queue. Here is the queue configuration:
- name: optimized-queue
  rate: 40/s
  bucket_size: 60
  max_concurrent_requests: 10
  retry_parameters:
    task_retry_limit: 1
    min_backoff_seconds: 10
    max_backoff_seconds: 200
Of course the timeout=300 isn't working because of the 60-second limit in GAE, but I think I can avoid it by using a task... Does anyone know how I can get the data in the file while avoiding this timeout?
Thanks a lot!
Cron jobs are limited to a 10-minute deadline, not 60 seconds. If your download fails, perhaps just retry? Does the download work if you download it from your computer? There's nothing you can do on GAE if the server you are downloading from is too slow or unstable.
Edit: According to https://cloud.google.com/appengine/docs/java/outbound-requests#request_timeouts, there is a maximum deadline of 60 seconds for cron job requests. Therefore, you can't get around it.
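For completeness, this is roughly what an explicit deadline looks like with urlfetch used directly (a sketch; check the documentation linked above for the maximum deadline that applies in your context):
from google.appengine.api import urlfetch

# Ask URL Fetch to wait up to `deadline` seconds for the response;
# the effective maximum depends on the request context (see the docs above).
result = urlfetch.fetch('http://bigfile.zip?type=CSV', deadline=60)
if result.status_code == 200:
    data = result.content  # the downloaded bytes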

JMeter - Regular Expression Extractor

My JMeter response returns a 'Location' field in the response headers. I want to fetch this Location header and use it in my other requests.
Sample Start: 2015-07-24 14:46:38 CEST
Load time: 163
Latency: 163
Size in bytes: 372
Headers size in bytes: 350
Body size in bytes: 22
Sample Count: 1
Error Count: 1
Response code: 201
Response message: Processed
Response headers:
HTTP/1.1 201 Processed
X-Backside-Transport: OK OK,FAIL FAIL
Connection: Keep-Alive
Transfer-Encoding: chunked
Location: /retail/iows/ie/en/storage/servicedocs/paxplanner/2015-07-24/eCommerce.pdf
X-Client-IP: 127.0.0.1,10.62.26.150
Content-Type: application/octet-stream
Date: Fri, 24 Jul 2015 12:46:38 GMT
X-Archived-Client-IP: 127.0.0.1
Steps I followed:
I used a Regular Expression Extractor.
I enabled the 'Response Headers' radio button and used the whole Location header as the pattern.
Please help me sort this out.
If you want to retrieve the Location field's value from the request's response, you might want to try the following pattern: Location:([^\r\n]+). The first matching group will contain the value of the Location field.
The above expression is based on the following rules:
HTTP header fields are colon (":") separated <key, value> pairs.
HTTP header fields are terminated by the EOL character combination (CR and LF).
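A quick way to sanity-check the pattern outside JMeter is a small Python script (the header text below is copied from the sample result in the question):
import re

# Response headers from the sample result above
headers = ("HTTP/1.1 201 Processed\r\n"
           "Connection: Keep-Alive\r\n"
           "Location: /retail/iows/ie/en/storage/servicedocs/paxplanner/2015-07-24/eCommerce.pdf\r\n"
           "X-Client-IP: 127.0.0.1,10.62.26.150\r\n")

# Same pattern as the extractor; group 1 holds the header value
match = re.search(r'Location:([^\r\n]+)', headers)
print(match.group(1).strip())
# -> /retail/iows/ie/en/storage/servicedocs/paxplanner/2015-07-24/eCommerce.pdf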
Please try this:
Location:([\s\S]*)X-Client
If it doesn't work, then try putting a \ before the - in X-Client (escaping the -).