We are getting a "403 Forbidden" error while trying to upload data to Google Cloud Storage when there is a time skew on my machine, i.e. my machine's clock is not synchronized with an NTP server.
Why does Google not return the proper error information?
It is very likely that you are setting the "Date" field incorrectly. All (signed) API v1.0 requests must include a "Date" header, and that header must be part of the signature for the request. The Date field must be within 15 minutes of the real clock time that Google's servers receive your request. If your clock is more than 15 minutes skewed, your signed requests will be rejected.
For more, please see the v1.0 API documentation here: https://developers.google.com/storage/docs/reference/v1/developer-guidev1#authentication under the CanonicalHeaders section.
This is also the case with S3. See here: http://aws.amazon.com/articles/1109?_encoding=UTF8&jiveRedirect=1#04
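For illustration only (this is not from the original answer), here is a minimal Java sketch of how such a Date header is typically generated from the local clock; note that a correctly formatted header does not help if the clock itself is skewed by more than 15 minutes:

import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class DateHeader {
    public static void main(String[] args) {
        // RFC 1123 date in GMT, taken from the local clock. This is the value that
        // gets signed, and it must arrive within 15 minutes of Google's server time.
        String date = DateTimeFormatter.RFC_1123_DATE_TIME
                .format(ZonedDateTime.now(ZoneOffset.UTC));
        System.out.println("Date: " + date);
    }
}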
My Moodle site is hosted on an AWS server with 8 GB of RAM. I carried out various load tests on the server using JMeter (NFT), testing from 15 to almost 1000 users, but I am still not getting any errors (the error rate is less than 0.3%). I am using the test scripts provided by Moodle itself. What could be the issue? Is there any issue with the script? I have attached a screenshot showing the report of the 1000-user test for reference.
If you're happy with the error rate and the response times (the maximum response time is more than 1 hour, which seems far too long to me), you can stop here and report the results.
However, I doubt a real user would be happy to wait 1 hour to see the login page, so I would rather define realistic pass/fail criteria, for example requiring a response time of no more than 5 seconds. With that threshold you would have more than 60% failures.
You can consider using the following test elements:
Set reasonable response timeouts using HTTP Request Defaults, so that any request lasting longer than 5 seconds is terminated and marked as failed.
Or use a Duration Assertion; in that case JMeter waits for the response and marks the sample as failed if the response time exceeds the defined duration.
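If it helps, here is a rough Java sketch (not part of the original answer) that post-processes a JMeter results file saved in CSV format and reports how many samples exceeded a 5-second threshold; the results.jtl file name and the 5000 ms limit are assumptions:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class SlaCheck {
    public static void main(String[] args) throws IOException {
        long limitMs = 5000; // assumed pass/fail threshold of 5 seconds
        List<String> lines = Files.readAllLines(Paths.get("results.jtl")); // hypothetical results file
        int elapsedIdx = Arrays.asList(lines.get(0).split(",")).indexOf("elapsed");
        if (elapsedIdx < 0) {
            System.out.println("No 'elapsed' column found; is the file a CSV-format JTL?");
            return;
        }
        long total = 0, slow = 0;
        for (String line : lines.subList(1, lines.size())) {
            String[] cols = line.split(","); // naive split; assumes sample labels contain no commas
            if (cols.length <= elapsedIdx) continue;
            total++;
            if (Long.parseLong(cols[elapsedIdx]) > limitMs) slow++;
        }
        System.out.printf("%d of %d samples (%.1f%%) took longer than %d ms%n",
                slow, total, total == 0 ? 0.0 : 100.0 * slow / total, limitMs);
    }
}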
I work on a Student Information System, and we're using the Admin SDK Directory API to create school districts' Google Org Unit structures from within our software.
POST https://www.googleapis.com/admin/directory/v1/customer/customerId/orgunits
When generating these API requests, we're consistently receiving dailyLimitExceeded errors even when the district's quota has not been reached.
The error can be worked around by catching it and implementing an exponential back-off routine, but this behaves much more like the quotaExceeded error is intended to behave than like dailyLimitExceeded, in that the request succeeds on the first retry.
In detail, the test I just ran successfully completed 9 of these API calls and then I received this response on the 10th:
Google.Apis.Requests.RequestError
Quota limit exceeded for the day. [403]
Errors [Message[Quota limit exceeded for the day.] Location[ - ] Reason[dailyLimitExceeded] Domain[usageLimits]
From the start of the batch of API calls it took about 10 seconds to get to the point where the error occurred.
Thanks for your help!
What I would suggest is to slow down your API requests: don't make something like 10 requests in 1 second; leave some space between requests. You are right to implement exponential backoff. Also, if you can, use other accounts as well to make requests.
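To illustrate the back-off approach (a sketch only, not code from the original posts; createOrgUnit and ApiRateLimitException are hypothetical stand-ins for your own Directory API wrapper and the SDK's 403 error type):

import java.util.Random;

public class BackoffExample {
    private static final Random RANDOM = new Random();

    // Hypothetical wrapper around the Directory API orgunits.insert call;
    // replace with your own client code.
    static void createOrgUnit(String customerId, String name) throws ApiRateLimitException {
        // ...
    }

    static void createWithBackoff(String customerId, String name) throws InterruptedException {
        int maxRetries = 5;
        for (int attempt = 0; ; attempt++) {
            try {
                createOrgUnit(customerId, name);
                return;                              // success
            } catch (ApiRateLimitException e) {      // e.g. 403 dailyLimitExceeded / quotaExceeded
                if (attempt >= maxRetries) throw e;
                // Exponential backoff with jitter: 1 s, 2 s, 4 s, ... plus up to 1 s of random delay.
                long sleepMs = (1000L << attempt) + RANDOM.nextInt(1000);
                Thread.sleep(sleepMs);
            }
        }
    }

    // Placeholder exception type standing in for the SDK's rate-limit error.
    static class ApiRateLimitException extends RuntimeException {}
}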
We are working on setting up an API Management portal for one of our Web APIs. We are using Event Hubs for logging the events, and we are transferring the event messages to Azure Blob Storage using Azure Functions.
We would like to know how we can find the time taken by the API Management layer to return the response for a message (we are capturing the time taken at the back-end API layer, but not at the API Management layer).
Regards,
John
The simpler solution is to enable Azure Monitor diagnostic logs for the API Management service. You will get raw logs for each request, including:
DurationMs - the interval between receiving the request line and headers from the client and writing the last chunk of the response body back to the client; all reads and writes include network latency.
BackendTime - time spent waiting on the backend response
ClientTime - time spent with the client for the request and response
CacheTime - time spent fetching from the cache
You can also refer to this video.
Not the most correct way of doing this, but you still get an idea of how much time each request is taking: you can use a context variable to record the start time in the inbound policy section and then calculate the elapsed time in the outbound section.
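A minimal policy sketch of that idea (the requestStart variable name and the X-Elapsed-Ms response header are illustrative choices, not anything API Management prescribes):

<inbound>
    <base />
    <set-variable name="requestStart" value="@(DateTime.UtcNow)" />
</inbound>
<outbound>
    <base />
    <!-- Emit the gateway-side elapsed time as a response header -->
    <set-header name="X-Elapsed-Ms" exists-action="override">
        <value>@((DateTime.UtcNow - (DateTime)context.Variables["requestStart"]).TotalMilliseconds.ToString("F0"))</value>
    </set-header>
</outbound>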
Is X-Amz-Expires a required header/parameter? Official documentation is inconsistent and uses it in some examples, while not in others.
If it is not required, what is the default expiration value of a signed request? Is it equal to the maximum possible value of the X-Amz-Expires parameter, which is 604800 (seven days)?
The documentation (see the links above) talks about the X-Amz-Expires parameter only in the context of passing signing parameters in a query string. If X-Amz-Expires is required, is it only required when passing signing parameters in the query string (as opposed to passing them in the Authorization header)?
Update:
The Introduction to AWS Security Processes paper says, on page 17:
A request must reach AWS within 15 minutes of the time stamp in the request. Otherwise, AWS denies the request.
Now what time stamp are we talking about here? My guess is that it is X-Amz-Date. If I am correct, then another question crops up:
How do the X-Amz-Date and X-Amz-Expires parameters relate to each other? To me it sounds like the request expiration algorithm falls back to 15 minutes from the X-Amz-Date timestamp if X-Amz-Expires is not present.
Is X-Amz-Expires a required header/parameter?
X-Amz-Expires is only used with query string authentication, not with the Authorization: header.
There is no default value with query string authentication. It is a required parameter, and the service will reject a request if X-Amz-Algorithm=AWS4-HMAC-SHA256 is present in the query string but X-Amz-Expires=... is not.
<Error>
<Code>AuthorizationQueryParametersError</Code>
...
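For illustration (not part of the original answer), here is a sketch using the AWS SDK for Java v1, where the expiration you pass is what becomes the X-Amz-Expires query parameter (assuming SigV4 signing is in effect, which it is by default in current SDK versions); the bucket and key names are placeholders:

import java.net.URL;
import java.util.Date;

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class PresignExample {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Expire the URL 15 minutes from now; the SDK turns this into the
        // X-Amz-Date and X-Amz-Expires (seconds of validity) query parameters.
        Date expiration = new Date(System.currentTimeMillis() + 15 * 60 * 1000);

        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest("example-bucket", "example-key") // placeholder names
                        .withMethod(HttpMethod.GET)
                        .withExpiration(expiration);

        URL url = s3.generatePresignedUrl(request);
        System.out.println(url); // contains X-Amz-Algorithm, X-Amz-Date, X-Amz-Expires, X-Amz-Signature, ...
    }
}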
Now what time stamp are we talking about here?
This refers to X-Amz-Date: when used with the Authorization: header. Because X-Amz-Date: is part of the input to the signing algorithm, a change in the date or time also changes the signature. An otherwise-identical request signed 1 second earlier or later has an entirely different signature. AWS essentially allows your server clock to be wrong by up to 15 minutes without breaking your ability to authenticate requests. It is not a fallback or a default. It is a fixed window.
The X-Amz-Date: of Authorization: header-based requests is compared by AWS to their system time, which is of course synced to UTC, and the request is rejected out of hand if this value is more than 15 minutes skewed from UTC when the request arrives. No other validation related to authentication occurs prior to the time check.
Validation of Query String authentication expiration involves different logic:
X-Amz-Expires must not be a value larger than 604800 or smaller than 0; otherwise the request is immediately denied without further processing, with a message similar to the one above.
X-Amz-Date must not be more than 15 minutes in the future, according to the AWS system clock. The error is Request is not yet valid.
X-Amz-Date must not be more than X-Amz-Expires number of seconds in the past, relative to the AWS system clock, and no 15 minute tolerance applies. The error is Request has expired.
If any of these conditions occur, no further validation is done on the signature, so these messages will not change depending on the validity of the signature. This is checked first.
Also, the leftmost 8 characters of your X-Amz-Date: must match the date portion of your Credential component of the Authorization: header. The date itself has zero tolerance for discrepancy against the credential (so, when signing, don't read your system time twice, else you risk generating an occasional invalid signature around midnight UTC).
Finally, requests do not expire while in the middle of processing. If you send a request using either signing method that is deemed valid when it arrives but would have expired very soon thereafter, it is always allowed to run to completion -- for example, a large S3 download or an EBS snapshot creation request will not start, then fail to continue, because the expiration timer struck while the request had already started on the AWS side. If the action was authorized when requested, then it continues and succeeds as normal.
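As a client-side illustration of the expiration rule above (a rough sketch only; the authoritative check happens against AWS's clock, not yours), the following Java snippet parses X-Amz-Date and X-Amz-Expires out of a presigned URL and estimates whether it has already expired; the URL in main() is a placeholder:

import java.net.URI;
import java.time.Duration;
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.HashMap;
import java.util.Map;

public class ExpiryCheck {
    // Client-side approximation of the rule above: the URL expires
    // X-Amz-Expires seconds after the X-Amz-Date timestamp.
    static boolean isExpired(String presignedUrl) {
        Map<String, String> q = new HashMap<>();
        for (String pair : URI.create(presignedUrl).getRawQuery().split("&")) {
            String[] kv = pair.split("=", 2);
            q.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        // X-Amz-Date uses the ISO 8601 basic format, e.g. 20230501T120000Z
        Instant signedAt = LocalDateTime
                .parse(q.get("X-Amz-Date"), DateTimeFormatter.ofPattern("yyyyMMdd'T'HHmmss'Z'"))
                .toInstant(ZoneOffset.UTC);
        long expiresSeconds = Long.parseLong(q.get("X-Amz-Expires"));
        return Instant.now().isAfter(signedAt.plus(Duration.ofSeconds(expiresSeconds)));
    }

    public static void main(String[] args) {
        // Placeholder URL; a real presigned URL carries more parameters and a signature.
        String url = "https://example-bucket.s3.amazonaws.com/example-key"
                + "?X-Amz-Algorithm=AWS4-HMAC-SHA256"
                + "&X-Amz-Date=20230501T120000Z"
                + "&X-Amz-Expires=900";
        System.out.println("expired? " + isExpired(url));
    }
}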
I have a Java 7 "agent" program running on several client machines (mostly Windows XP). My "agent" uploads client files to Amazon S3 and often I get this error:
RequestTimeTooSkewed
I know this happens because the client computer's system time differs too much from Amazon's. Here's my problem: I can't control the client's computer (system) time! So I don't want Amazon to care about time differences.
I have heard about jets3t, but I'm hoping not to have to resort to yet another library (the agent's footprint must remain small).
Any ideas how to remove this check and get rid of this pesky error?
Error detail:
Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 59C9614D15006F23, AWS Error Code: RequestTimeTooSkewed, AWS Error Message: The difference between the request time and the current time is too large., S3 Extended Request ID: v1pGBm3ed2J9dZ3sG/3aDrG3DUGSlt3Ac+9nduK2slih2wyaAnc1n5Jrt5TkRzlV
The error is coming from the S3 service, not from the client, so there really isn't anything you can do other than correct the clock on the client. That check is being done on the service to help detect and prevent replay attacks so it's an important part of the overall security of the service.
Trying a different client-side SDK won't help.
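If it helps the agent at least detect and report the problem (this does not remove the check, which, as noted, you cannot do), here is a rough Java 7-compatible sketch that estimates the local clock skew from the Date header of an HTTPS response; the endpoint choice is just illustrative:

import java.net.HttpURLConnection;
import java.net.URL;

public class ClockSkewCheck {
    public static void main(String[] args) throws Exception {
        // Any HTTPS endpoint that returns a Date header will do; the S3 endpoint
        // below is only an illustrative choice.
        URL url = new URL("https://s3.amazonaws.com/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("HEAD");

        long serverTime = conn.getHeaderFieldDate("Date", 0L); // epoch millis from the Date header
        long localTime = System.currentTimeMillis();
        conn.disconnect();

        if (serverTime == 0L) {
            System.out.println("No Date header returned; cannot estimate skew.");
            return;
        }
        long skewMinutes = Math.abs(localTime - serverTime) / 60000;
        System.out.println("Approximate clock skew: " + skewMinutes + " minutes");
        if (skewMinutes >= 15) {
            System.out.println("Clock is skewed beyond the 15-minute limit; fix the system time.");
        }
    }
}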