I am in the Intruder tab of Burp Suite Free Edition v1.7.03.
I'm able to run an attack, which is essentially a series of HTTP requests, but I want a request to be made every 10 minutes instead of at the default interval of roughly 3 seconds.
I don't know where the 3-second default came from; I think Burp tries to go as fast as possible by default. In any case, you can set this with the Throttle settings under Intruder / Options / Request Engine. You need to set the Number of Threads to 1 and set Throttle (milliseconds) to a fixed 600000, i.e. 10 minutes, like this:
Setting throttle to 10 minutes
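For what it's worth, 600000 milliseconds is 600 seconds, i.e. 10 minutes. If you ever need the same single-threaded pacing outside Burp, a minimal Python sketch would look like this (the URL and payload list are placeholders, not anything from the question):

    # Minimal stand-alone sketch (not Burp itself) of one request every 10 minutes, single-threaded.
    # TARGET_URL and PAYLOADS are placeholders.
    import time
    import requests

    TARGET_URL = "http://example.com/login"      # placeholder
    PAYLOADS = ["admin", "guest", "test"]        # placeholder payload list
    THROTTLE_MS = 600_000                        # 10 minutes, same figure as the Throttle field

    for payload in PAYLOADS:
        response = requests.post(TARGET_URL, data={"username": payload})
        print(payload, response.status_code)
        time.sleep(THROTTLE_MS / 1000)           # wait 10 minutes between requests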
So my issue is rather simple, in all honesty. I'm trying to see if there is a way to trigger Lifecycle Events within AWS IoT much more quickly. So far my code on connect is as follows:
mqttc.connect(aws_iot_endpoint, port=443, keepalive=1)
The value for keepalive cannot be lower than 1, as anything less doesn't give the thing enough time to connect to AWS. When the connection to the device is lost, it takes approximately 7 to 8 seconds for AWS IoT to send out this message:
MQTT_KEEP_ALIVE_TIMEOUT
I was wondering if there is any way to decrease that time even further? Is using AWS IoT Events the way forward?
If your keep-alive is set to 1 second, then MQTT_KEEP_ALIVE_TIMEOUT should fire at 1.5x the keep-alive, i.e. 1.5 seconds, not 7-8 seconds.
Make sure you're also setting your ping timeout (in ms) to a value shorter than 1000 ms; otherwise AWS may just fall back to its default ping timeout of 3 seconds.
Keep-alive cannot be set to 1 second per the AWS docs; values less than 30 are raised to 30.
The default keep-alive interval is 1200 seconds. It is used when a client requests a keep-alive interval of zero. If a client requests an interval > 1200 seconds, the default interval is used. If a client requests a keep-alive interval < 30 seconds but > zero, the server treats the client as though it requested a keep-alive interval of 30 seconds.
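Putting those constraints together, a minimal paho-mqtt sketch of the fastest detection you can realistically get looks something like this. The endpoint, certificate paths and client ID are placeholders, and it assumes the standard X.509/port-8883 setup rather than the ALPN-on-443 setup from the question:

    # Placeholders throughout: endpoint, cert/key paths and client ID.
    import ssl
    import paho.mqtt.client as mqtt

    aws_iot_endpoint = "xxxxxxxxxxxxxx-ats.iot.us-east-1.amazonaws.com"   # placeholder

    mqttc = mqtt.Client(client_id="my-thing")    # paho-mqtt 1.x style constructor
    mqttc.tls_set(ca_certs="AmazonRootCA1.pem",
                  certfile="device.pem.crt",
                  keyfile="private.pem.key",
                  tls_version=ssl.PROTOCOL_TLSv1_2)

    # 30 s is the smallest keep-alive AWS IoT honours (anything lower is raised to 30),
    # so with the 1.5x rule the quickest MQTT_KEEP_ALIVE_TIMEOUT you can expect is ~45 s.
    mqttc.connect(aws_iot_endpoint, port=8883, keepalive=30)
    mqttc.loop_forever()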
I want to use the Concurrency Thread Group, so I'm using this configuration:
What I'm expecting is to send 10 requests in 5 seconds and then hold the load for 1 second, but this is the result after running my script: more than 10 HTTP requests are sent.
How can I make it send only 10 requests?
Thank you.
A similar behaviour happens with the Ultimate Thread Group.
You're not sending 10 requests in 5 seconds; you're launching threads (virtual users) over 5 seconds: JMeter will add 2 virtual users each second for 5 seconds and then hold the load for 1 second.
The actual number of requests made depends on your application's response time: the higher the response time, the fewer requests; the lower the response time, the more requests.
If you want to send exactly 10 requests evenly distributed over 5 seconds, go for the following configuration (there's a rough sketch of the resulting load profile after the list):
Normal Thread Group with users * loops = 10, to wit:
10 users - 1 loop
5 users - 2 loops
etc.
Throughput Controller in Total Executions mode and Throughput set to 10
HTTP Request
Throughput Shaping Timer configured to send 2 requests per second
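If it helps to see the intended profile concretely, here is a rough stand-alone Python illustration of "10 requests, evenly paced at 2 per second over 5 seconds". The URL is a placeholder; this is not JMeter code, just the load shape the configuration above is meant to produce:

    # Illustration of the target profile only: 10 requests at 2 requests/second.
    # TARGET_URL is a placeholder.
    import time
    import requests

    TARGET_URL = "http://example.com/api"
    TOTAL_REQUESTS = 10
    REQUESTS_PER_SECOND = 2

    for i in range(TOTAL_REQUESTS):
        started = time.monotonic()
        response = requests.get(TARGET_URL)
        print(f"request {i + 1}: {response.status_code}")
        # Pad out the rest of the 0.5 s slot so pacing stays at 2 req/s
        # regardless of how quickly the application responds.
        elapsed = time.monotonic() - started
        time.sleep(max(0.0, 1.0 / REQUESTS_PER_SECOND - elapsed))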
I have a path (mysite.com/myapiendpoint, for the sake of example) that is both resource-intensive to serve and very prone to bot abuse. I need to rate-limit access to that specific path to something like 10 requests per minute per client IP address. How can this be done?
I'm hosting off an EC2 instance with CloudFront and AWS WAF in front. I have the standard "Rate Based Rule" enabled, but its minimum threshold of 2,000 requests per IP address is absolutely unusable for my application.
I was considering using API Gateway for this, and have used it in the past, but its rate limiting as I understand it is not based on IP address, so bots would simply use up the limit and legitimate users would constantly be denied usage of the endpoint.
My site does not use sessions of any sort, so I don't think I could do any sort of rate limiting in the server itself. Also please bear in mind my site is a one-man-operation and I'm somewhat new to AWS :)
How can I limit the usage per IP to something like 10 requests per minute, preferably in WAF?
[Edit]
After more research I'm wondering if I could enable header forwarding to the origin (running node/express) and use a rate-limiter package. Is this a viable solution?
I don't know if this is still useful to you, but I just got a tip from AWS support: if you add the rate-limit rule multiple times, it effectively lowers the threshold each time. Basically, each time you add the rule, it counts an extra request for each IP. So say an IP makes a single request: if you have 2 rate-limit rules applied, the request is counted twice. So instead of 2,000 requests, the IP only has to make 1,000 before it gets blocked. If you add 3 rules, each request is counted 3 times, so the IP will be blocked at 667 requests.
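If the support engineer is right, the arithmetic works out like this (a trivial sketch; 2000 is the old WAF minimum threshold from the question):

    import math

    def blocking_request_count(waf_limit: int, rule_copies: int) -> int:
        # Each real request is counted rule_copies times, so blocking starts at the
        # first request where the running count reaches the configured limit.
        return math.ceil(waf_limit / rule_copies)

    for copies in (1, 2, 3):
        print(copies, "rule(s): blocked after", blocking_request_count(2000, copies), "requests")
    # 1 rule(s): blocked after 2000 requests
    # 2 rule(s): blocked after 1000 requests
    # 3 rule(s): blocked after 667 requests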
The other thing they clarified is that the "window" is 5 minutes, but if the total is breached anywhere in that window, the IP is blocked. I had thought WAF would only evaluate the requests after a 5-minute period. For example, say you have a single rule for 2,000 requests in 5 minutes, and an IP makes 2,000 requests in the first minute, then only 10 requests over the next 4 minutes. I initially understood that the IP would only be blocked after minute 5 (because WAF evaluates a 5-minute window). But apparently, if the IP exceeds the limit anywhere in that window, it is blocked immediately. So if that IP makes 2,000 requests in minute 1, it will actually be blocked during minutes 2, 3, 4 and 5, and then allowed again from minute 6 onward.
This clarified a lot for me. Having said that, I haven't tested this yet. I assume the AWS support techie knows what he's talking about - but definitely worth testing first.
AWS have now finally released an update which allows the rate limit to go as low as 100 requests every 5 minutes.
Announcement post: https://aws.amazon.com/about-aws/whats-new/2019/08/lower-threshold-for-aws-waf-rate-based-rules/
Adding the rule twice will not work, because WAF rate-based rules count requests on a CloudWatch basis; each rule counts the 2,000 requests separately, so this will not work for you.
You can use the AWS WAF Security Automations CloudFront template and choose the Lambda/Athena log parser. That way, request counting is performed against the S3 access logs, and you will also be able to block SQL injection, XSS, and bad-bot requests.
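I haven't used that template for this exact case, but the core log-parsing idea is roughly the following. The bucket, key, path, threshold and field positions are assumptions based on the standard tab-separated CloudFront access-log format; the real solution also updates a WAF IP set to do the actual blocking:

    # Sketch only: count requests per client IP for one gzipped CloudFront access log in S3.
    # Bucket, key, path and threshold are placeholders.
    import gzip
    import io
    from collections import Counter

    import boto3

    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket="my-cloudfront-logs", Key="E123456789.2020-01-01-00.abcdef.gz")

    counts = Counter()
    with gzip.open(io.BytesIO(obj["Body"].read()), mode="rt") as log:
        for line in log:
            if line.startswith("#"):                 # skip the W3C header lines
                continue
            fields = line.rstrip("\n").split("\t")
            client_ip, uri_stem = fields[4], fields[7]
            if uri_stem == "/myapiendpoint":
                counts[client_ip] += 1

    THRESHOLD = 10                                   # placeholder: requests per log period
    offenders = [ip for ip, n in counts.items() if n > THRESHOLD]
    print("candidate IPs to block:", offenders)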
So I am trying to use a web service on my Apache server, and when I send a request to the service I should receive about 9,000 items packed in XML format, with multiple properties for each.
The problem, I believe, is that when I make this request, it takes so long to process the response that the server times out the request and I never receive anything. A request for about 1,000 items takes about 7 seconds. I believe there is a 60-second limit somewhere in the server, since 9,000 items, if the time scales linearly, would take about 63 seconds, which is just past that 1-minute limit.
Does anyone have an idea about this problem?
You can try bumping up the connectionTimeout parameter to a higher number. It's set to 60 seconds by default.
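Assuming this is Apache Tomcat (the question only says "Apache server"), connectionTimeout lives on the Connector element in conf/server.xml and is given in milliseconds, so raising it to two minutes, for example, would look roughly like:

    <!-- conf/server.xml: raise the connector timeout (milliseconds), e.g. to 120000 -->
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="120000"
               redirectPort="8443" />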
I am looking for a network AAA (authentication, authorization, accounting) protocol that can manage concurrent network resource access from one account. Say an account is logged in by two users concurrently: how can I distribute the account's session timeout between the two users?
I am assuming you are not looking for the specific AAA functionality as used by telecommunications companies, but rather RADIUS on steroids. Perhaps the easiest way to do this is to put something like FreeRADIUS in place.
I'll assume your particular NAS device (Wi-Fi hub, packet gateway, etc.) supports the following RADIUS records.
Access Request
Access Accept/Reject
Accounting Start
Accounting Stop
Interim Accounting
Session Disconnect
When you get a session start, let FreeRADIUS run some sort of script, or log that start into a database. This is your clock start for each user. Even if the user logs in three times, you'll get three start messages, and when they log out you'll get a stop for each session. At a minimum, simply query the database, compute the deltas, and apply your accounting rules to that user. If that user used 10, 20 and 30 minutes in concurrent sessions, you'll get stop records showing 10, 20 and 30 minutes.
This works, but it doesn't go quite far enough. First, if the sessions are long, you won't know the duration of those sessions until they terminate, which could be days from now. This is where the accounting records, particularly the interim accounting records, come in. If your NAS supports it, you can tell it to generate an interim accounting record for a session, say, every 30 minutes. Thus, if a session lasts 30 minutes or less, you'll get just the start and stop records. If a session lasts 45 minutes, however, you'll get:
A start record at time 0
An interim accounting update at time 30
A stop record at time 45
It's not really the AAA server you need to worry about; any RADIUS server will likely do the job (FreeRADIUS, OpenRADIUS, Microsoft's RADIUS server). It's your NAS device that matters: if it can't send the records, you can't process them.
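As a rough illustration of the delta computation above, here is a sketch that totals concurrent session time per account from start/interim/stop records. The record layout is invented for the example; in a real deployment you'd read the Acct-Status-Type / Acct-Session-Id / Acct-Session-Time attributes from whatever database FreeRADIUS logs accounting into:

    # Sketch: total per-account usage from accounting records, crediting only the
    # time elapsed since the previous record for each session so interim updates
    # and the final stop record are not double-counted.
    from collections import defaultdict

    # (account, session_id, status, minutes since that session started); layout invented for the example
    records = [
        ("alice", "s1", "start",   0),
        ("alice", "s2", "start",   0),
        ("alice", "s1", "interim", 30),
        ("alice", "s1", "stop",    45),
        ("alice", "s2", "stop",    20),
    ]

    usage = defaultdict(int)   # total minutes per account
    last_seen = {}             # latest known duration per (account, session)

    for account, session_id, status, minutes in records:
        previous = last_seen.get((account, session_id), 0)
        usage[account] += minutes - previous
        last_seen[(account, session_id)] = minutes

    print(dict(usage))         # {'alice': 65}  i.e. 45 + 20 concurrent minutes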