I have a Tornado web server. Is there any way I can control the number of incoming requests? I want to accept only x requests from a single client in a given timeframe.
Set a cookie whose expiry matches the timeframe you want, and use this cookie to keep count of the requests.
code sample:
Let's say you want the timeframe to be one day; here is how you would set the cookie. Do this when the user logs in (or after any action you want):
self.set_secure_cookie('requestscount', '0', expires_days=1)
and then check the count value before giving access to the resource (get_secure_cookie returns None when the cookie is missing or invalid, hence the fallback to 0):
user_requests = int(self.get_secure_cookie('requestscount') or 0)
if user_requests < MAX_USER_REQUESTS:
    user_requests += 1
    self.set_secure_cookie('requestscount', str(user_requests), expires_days=1)
    # serve the resource to the user
    ...
Of course there are other ways. You could keep this count in a database instead of a cookie.
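A client can sidestep the cookie simply by discarding it, so a server-side counter keyed by client (for example by remote IP) is more robust. Here is a minimal sketch of a sliding-window limiter in plain Python; the RateLimiter name and its API are illustrative, not part of Tornado:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most max_requests per client within any window_seconds span."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # client_id -> timestamps of recent requests

    def allow(self, client_id, now=None):
        now = time.time() if now is None else now
        q = self.hits[client_id]
        # Drop timestamps that fell out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False
        q.append(now)
        return True
```

In a Tornado RequestHandler you might call limiter.allow(self.request.remote_ip) from prepare() and raise tornado.web.HTTPError(429) when it returns False.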
I have created a simple JMeter script in v5.3. I am using an HTTP Cookie Manager and an HTTP Cache Manager at the test plan level, with "Clear cookies/cache each iteration" selected on both. Under that I have one Thread Group with an HTTP sampler inside it. At the Thread Group level I have selected "Same user on each iteration" and executed it with 1 thread and 3 loop counts.
Will it behave as the same user across all 3 iterations, or treat them as different users, given that we have selected clear cookies and cache on each iteration at the test plan level?
Why would you do mutually exclusive things?
If you want cookies and cache to remain between Thread Group iterations - tick Same user on each iteration box
And vice versa, if you want to clear cookies and cache - don't tick this box
Going forward:
If you tick the Same user on each iteration box at the Thread Group level and the Clear cache each iteration box at the HTTP Cache Manager level, the HTTP Cache Manager settings will override the Thread Group settings
More information:
HTTP Cache Manager
HTTP Cookie Manager
Introducing JMeter 5.2!
I have a thread group with 100 threads (users) and a loop count of 10.
I have a Cookie Manager with default settings.
The users are anonymous (not logged in), but I want to track the number of users hitting the site in Application Insights, as each new user will generate a new .NET session token.
When I run the test, I would expect cookies to be local to each thread's loop iteration.
So I would expect cookies to be "cleared" on each thread 10 times, and therefore to generate 1,000 .NET session cookies on my application.
However, I don't; I see 1.
In the Cookie Manager, there are two options:
Clear cookies each iteration
Use thread group configuration to control cookie clearing
Both are unchecked.
But this makes no sense - I want the cookies to be cleared on each iteration for each user.
Should I check one or both of these? Do I need to set anything on the thread group?
In the thread group, I have "Same user on each iteration" unchecked - each iteration should be considered a new user.
Also, does it matter where the Cookie Manager goes? I have always put it at the top, above the thread group, but perhaps it is supposed to go under the thread group?
Ideas?
100 * 10 gives 1000, not 10000
There are 2 ways to clear the cookies each iteration:
Tick the Clear cookies each iteration box in the HTTP Cookie Manager
Or tick Use thread group configuration to control cookie clearing and untick Same user on each iteration at the Thread Group level
I would go for the latter option, as it also controls the HTTP Cache Manager and HTTP Authorization Manager
Note that the HTTP Cookie Manager considers only a Thread Group iteration to be an iteration; other loop sources like the Loop Controller or While Controller are not taken into account
You might also consider placing your HTTP Cookie Manager(s) so as to limit its (their) scope according to your test scenario
Alternatively, you can add a JSR223 PreProcessor to the first request with code that clears the cookies at the start of each iteration:
sampler.getCookieManager().clear()
I did some experimentation. I got the right results after doing this:
put the Cookie Manager under the thread group (before, I had it above)
set "Use thread group configuration to control cookie clearing" in the Cookie Manager
in the thread group, make sure "Same user on each iteration" is unchecked
Now, for each thread and each iteration, I see new session cookies, and these are preserved just for that thread and iteration.
I have a website. The user logs in, enters a site URL, then sets an interval in minutes (for example, 7 minutes), and then leaves the site. After 7 minutes, some program, script, or service (I don't know what it's called) should start, perform certain actions on the site the user specified, and then email the result. How can I build this service so that it works even after the user has logged out and closed the browser? I can't figure out which direction to move in... I use AWS from Amazon.
UPD: let me describe it in more detail. There is a login form; the user enters a login/password, the data is checked against a database table called users, and a cookie is set with the user id (idUser). Then the user enters one or more sites, which are stored in a table named data_(idUser). The interval, a value in the range 1-60 minutes, is stored in settings_(idUser). Suppose he sets an interval of 7 minutes. Then the user closes the tab and the browser. At the specified interval (7 minutes) a script should start that takes the data from data_(idUser) (several site URLs are stored there), processes them, and emails the results of the site checks. But the problem is that there will be a single script, and how can it access the database if it doesn't know idUser, since it can't get it from a cookie either... Maybe I should change the database structure altogether?
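One way to read this: the script does not need a cookie at all, because it can iterate over every row of the users table on each run (triggered, for example, every minute by cron on an EC2 instance or by a CloudWatch Events schedule) and decide which users are due. A hypothetical sketch of that scheduling decision, assuming settings_(idUser) stores an interval_min value and a last_run timestamp:

```python
def users_due(settings, now):
    """Return the ids of users whose check interval has elapsed.

    settings maps idUser -> {"interval_min": int, "last_run": epoch seconds};
    these field names are assumptions, not taken from the question.
    """
    due = []
    for user_id, cfg in settings.items():
        if now - cfg["last_run"] >= cfg["interval_min"] * 60:
            due.append(user_id)
    return due
```

For each id returned, the script would read the URLs from data_(idUser), run the checks, email the results, and update last_run.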
I have a Java web app hosted on Google App Engine (GAE). The User clicks on a button and he gets a data table with 100 rows. At the bottom of the page, there is a "Make Web service calls" button. Clicking on that, the application will take one row at a time and make a third party web-service call using the URLConnection class. That part is working fine.
However, since there is a 60-second limit on the HttpRequest/Response cycle, not all 100 transactions go through; the timeout hits around row 50 or so.
How do I create a loop and send the Web service calls without the User having to click on the 'Make Webservice calls' more than once?
Is there a way to stop the loop before 60 seconds and then start again without committing the HttpResponse? (I don't want to use asynchronous Google backend).
Also, does GAE support file upload (to get the 100 rows from a file instead of a database)?
Thank you.
Adding some code as per the comments:
URL url = new URL(urlString);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("POST");
connection.setConnectTimeout(35000);
connection.setRequestProperty("Accept-Language", "en-US,en;q=0.5");
connection.setRequestProperty("Authorization", encodedCredentials);
// Send the POST body
DataOutputStream wr = new DataOutputStream(connection.getOutputStream());
wr.writeBytes(submitRequest);
wr.flush();
wr.close();
It all depends on what happens with the results of these calls.
If results are not returned to a UI, there is no need to block it. You can use Tasks API to create 100 tasks and return a response to a user. This will take a few seconds at most. The additional benefit is that you can make up to 10 calls in parallel by using tasks.
If results have to be returned to a user, you can still use up to 10 threads to process as many requests in parallel as possible. Hopefully this will bring your time under 1 minute, but you cannot guarantee it, since you depend on responses from third-party resources which may be unavailable at the moment. You will have to implement your own retry mechanism.
Also note that users are not accustomed to waiting for several minutes for a website to respond. You may consider a different approach when a user is notified after the last request is processed without blocking your client code.
And yes, you can load data from files on App Engine.
Try using asynchronous urlfetch calls:
LinkedList<Future<HTTPResponse>> futures = new LinkedList<>();
// Start all the requests
for (URL url : urls) {
    HTTPRequest request = new HTTPRequest(url, HTTPMethod.POST);
    request.setPayload(...);
    futures.add(urlFetchService.fetchAsync(request));
}
// Collect all the results
for (Future<HTTPResponse> future : futures) {
    HTTPResponse response = future.get();
    // Do something with the response
}
I am thinking of a REST web service that ensures, for every request sent to it, that:
the request was generated by the user who claims to have sent it;
the request has not been modified by someone else (URI/method/content/date).
For GET requests, it should be possible to generate a URI with enough information in it to check the signature and to set an expiration date. That way a user can delegate temporary READ permission on a resource to a collaborator, for a limited time period, via a generated URI.
Clients are authenticated with an id and a content signature based on their password.
There should be no session at all, and thus no server state! The server and the client share a secret key (a password).
After thinking about it and talking with some really nice folks, it seems there is no existing REST service that does this as simply as it should be for my use case. (HTTP Digest and OAuth can do it, but with server state, and they are very chatty.)
So I imagined one, and I'm asking for your comments on how it should be designed (I will release it as open source and hope it can help others).
The service uses a custom "Content-signature" header to carry the credentials. An authenticated request should contain this header:
Content-signature: <METHOD>-<USERID>-<SIGNATURE>
<METHOD> is the sign method used, in our case SRAS.
<USERID> stands for the user ID mentioned earlier.
<SIGNATURE> = SHA2(SHA2(<PASSWORD>):SHA2(<REQUEST_HASH>));
<REQUEST_HASH> = <HTTP_METHOD>\n
<HTTP_URI>\n
<REQUEST_DATE>\n
<BODY_CONTENT>;
A request is invalidated 10 minutes after it has been created.
For example a typical HTTP REQUEST would be :
POST /ressource HTTP/1.1
Host: www.elphia.fr
Date: Sun, 06 Nov 1994 08:49:37 GMT
Content-signature: SRAS-62ABCD651FD52614BC42FD-760FA9826BC654BC42FD
{ test: "yes" }
The server will answer :
401 Unauthorized
OR
200 OK
Variables would be :
<USERID> = 62ABCD651FD52614BC42FD
<REQUEST_HASH> = POST\n
/ressource\n
Sun, 06 Nov 1994 08:49:37 GMT\n
{ test: "yes" }\n
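Assuming SHA2 here means hex-encoded SHA-256 and that the digests in the <SIGNATURE> formula are joined with a literal colon (the spec above does not pin either down, nor whether <REQUEST_HASH> ends with a trailing newline), the header value could be computed like this:

```python
import hashlib

def sha2(data: str) -> str:
    """Hex-encoded SHA-256; an assumption about what the spec calls SHA2."""
    return hashlib.sha256(data.encode("utf-8")).hexdigest()

def sras_signature(password: str, method: str, uri: str, date: str, body: str) -> str:
    # <REQUEST_HASH> = <HTTP_METHOD>\n<HTTP_URI>\n<REQUEST_DATE>\n<BODY_CONTENT>
    # (no trailing newline here; the worked example above suggests one may be intended)
    request_hash = "\n".join([method, uri, date, body])
    # <SIGNATURE> = SHA2(SHA2(<PASSWORD>):SHA2(<REQUEST_HASH>))
    return sha2(sha2(password) + ":" + sha2(request_hash))

sig = sras_signature("secret", "POST", "/ressource",
                     "Sun, 06 Nov 1994 08:49:37 GMT", '{ test: "yes" }')
header = "Content-signature: SRAS-62ABCD651FD52614BC42FD-" + sig
```

Whatever is chosen, the server must recompute the exact same <REQUEST_HASH> from the raw request, so the canonical form (trailing newline, URI encoding) has to be specified unambiguously.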
URI Parameters
Some parameters can be added to the URI (they override the header information):
_sras.content-signature=<METHOD>-<USERID>-<SIGNATURE> : put the credentials in the URI instead of the HTTP header. This allows a user to share a signed request;
_sras.date=Sun, 06 Nov 1994 08:49:37 GMT (request date*) : the date when the request was created;
_sras.expires=Sun, 06 Nov 1994 08:49:37 GMT (expiry date*) : tells the server the request should not expire before the specified date.
*date format : http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.18
Thanks for your comments.
There are several issues that you need to consider when designing a signature protocol. Some of these issues might not apply to your particular service:
1- It is customary to add an "X-Namespace-" prefix to non-standard headers, in your case you could name your header something like: "X-SRAS-Content-Signature".
2- The Date header might not provide enough resolution for the nonce value, I would therefore advise for a timestamp having at least 1 millisecond of resolution.
3- If you do not store at least the last nonce, one can still replay a message within the 10-minute window, which is probably unacceptable for a POST request (it could create multiple instances with the same values in your REST web service). This should not be a problem for the GET, PUT or DELETE verbs.
However, on a PUT this could be used for a denial-of-service attack by forcing the same object to be updated many times within the proposed 10-minute window. A similar problem exists for GET and DELETE.
You therefore probably need to store at least the last used nonce associated with each user id, and share this state between all your authentication servers in real time.
4- This method also requires that the client and server clocks be synchronized with less than 10 minutes of skew. This can be tricky to debug, or impossible to enforce if you have AJAX clients whose clocks you do not control. It also requires all timestamps to be in UTC.
An alternative is to drop the 10-minute-window requirement but verify that timestamps increase monotonically, which again requires storing the last nonce. This is still a problem if the client's clock is set back to a date prior to the last used nonce: access would be denied until the client's clock passes the last nonce or the server's nonce state is reset.
A monotonically increasing counter is not an option for clients that cannot store state, unless the client can request the last used nonce from the server. This would be done once at the beginning of each session, and the counter would then be incremented on each request.
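The "store the last nonce" logic from points 3 and 4 can be sketched as a per-user monotonicity check; here an in-memory dict stands in for the state that would really have to be shared between all authentication servers:

```python
last_nonce = {}  # user_id -> highest nonce (e.g. timestamp) accepted so far

def accept_nonce(user_id, nonce):
    """Reject any request whose nonce does not strictly increase for this user."""
    if nonce <= last_nonce.get(user_id, float("-inf")):
        return False  # replayed or out-of-order request
    last_nonce[user_id] = nonce
    return True
```

Resetting this state (or a client clock jumping backwards) locks the user out until the nonces catch up, which is exactly the trade-off described above.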
5- You also need to pay attention to retransmissions due to network errors. You cannot assume that the server has not received a message just because the client's TCP connection dropped before the ACK arrived. The nonce therefore needs to be incremented between each retransmission above the TCP level, and the signature recalculated with the new nonce. Yet a message number needs to be added to prevent double execution on the server: a double POST would result in 2 objects being created.
6- You also need to sign the user id; otherwise, an attacker might be able to replay the same message against all users whose nonces have not yet reached that of the replayed message.
7- Your method does not guarantee to the client that the server is authentic and has not been DNS-hijacked. Server authentication is usually considered important for secure communications. This service could be provided by signing the server's responses, using the same nonce as that of the request.
I would note that you can accomplish this with OAuth, most notably "2-legged OAuth" where client and server share a secret. See https://www.rfc-editor.org/rfc/rfc5849#page-14. In your case, you want to omit the oauth_token parameter and probably use the HMAC-SHA1 signature method. There's nothing particularly chatty about this; you don't need to go through the OAuth token acquisition flows to do things this way. This has the advantage of being able to use any of several existing open source OAuth libraries.
As far as server-side state, you do need to keep track of what secrets go with which clients, as well as which nonces have been used recently (to prevent replay attacks). You can skip the nonce checking / lifetimes if you run things over HTTPS, but if you're going to do that, then HTTPS + Basic Auth gives you everything you described without having to write new software.
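For reference, the HMAC-SHA1 signing step of RFC 5849 looks like the sketch below. Building the signature base string itself (HTTP method, normalized URI, sorted and percent-encoded parameters, section 3.4.1) is omitted here; in the 2-legged case the token secret is simply empty:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def hmac_sha1_signature(base_string: str, consumer_secret: str,
                        token_secret: str = "") -> str:
    # RFC 5849 section 3.4.2: the key is the percent-encoded consumer secret,
    # an "&", and the percent-encoded token secret (empty for 2-legged OAuth).
    key = quote(consumer_secret, safe="") + "&" + quote(token_secret, safe="")
    digest = hmac.new(key.encode("ascii"), base_string.encode("ascii"),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")
```

The base64 result goes into the oauth_signature parameter; existing OAuth libraries handle the base-string construction and parameter encoding for you, which is the main reason to prefer them over a custom scheme.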