Exclude headers from s3v4 signature calculation - amazon-web-services

We are using an on-prem S3-compatible storage server on an intranet, and we want to expose this intranet URL to the internet, so we used a reverse proxy with a mapping to the intranet URL. When we test the intranet URL it works perfectly, but when we test the internet URL we get this 403 error:
The request signature we calculated does not match the signature you provided. Check your Secret Access Key and signing method. For more information, see REST Authentication and SOAP Authentication for details. (Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: 0a440c7f:15cc604b1e2:12d3af:24d; S3 Extended Request ID: null), S3 Extended Request ID: null
After debugging, we found that the proxy modifies the Host header (which is used to calculate the signature) in order to redirect the request to the intranet URL...
So my question is: how can I suppress some headers from the V4 signature calculation using the AWS SDK or the Boto3 client? Or is there a better architecture for exposing an on-prem S3 service?
Thanks in advance.
Amir.

There are essentially two solutions to this.
The first one is easier: sign the request for the internal URL, then just use simple string prefix replacement to rewrite the host part of the signed URL to point it to the hostname of the external proxy. When the proxy rewrites the Host header, it will end up rewriting it back to exactly what you signed.
It is, I assume, common knowledge that signed URLs are immune to tampering, for all practical purposes: you can't change anything about a signed URL without invalidating it... but that's not what this is. The change is temporary, and the proxy's net effect is to undo the change.
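A minimal boto3 sketch of that first approach; the internal/external hostnames, bucket, and key are assumptions for illustration:

```python
# A minimal sketch, assuming hypothetical hostnames: presign against the
# internal endpoint the proxy forwards to, then swap in the public hostname.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://s3.intranet.example",  # internal endpoint (assumption)
    aws_access_key_id="AKIA...",
    aws_secret_access_key="...",
    config=Config(signature_version="s3v4"),
)

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "my-object"},
    ExpiresIn=3600,
)

# The signature covers the internal Host; the proxy rewrites the public Host
# back to it, so this replacement is undone before the signature is checked.
public_url = url.replace("http://s3.intranet.example", "https://s3.example.com")
```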
The alternate solution requires the proxy or another service in the chain (before the storage service) to know the signing keys and secrets, so that it can first validate the incoming request and, if valid, modify the request and then generate a new signature that the service will accept.

I once wrote a service to do this so that when a request was for HEAD, the proxy would use the same key and secret (which it knew) to generate a signature for the same request, but with GET. If it matched the signature in the incoming request, the proxy would replace the existing signature with a signature for a HEAD request -- thus allowing the client to use a URL originally signed for a GET request to make either a GET or a HEAD request, something S3 does not natively support, since a GET and a HEAD for the same object require two different signed URLs.

The concept is the same, though: generate a signature in the proxy for what the client is requesting, to validate the incoming signature, and then re-sign the request as needed. The solution I built used HAProxy's Lua integration to examine and modify the request in flight.
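A rough sketch of that re-signing idea in Python using botocore's SigV4 signer (the original used HAProxy and Lua; the credentials, region, and URLs here are placeholders):

```python
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
from botocore.credentials import Credentials

creds = Credentials("AKIA...", "...")  # keys the proxy knows (placeholders)

def authorization_for(method: str, url: str, headers: dict) -> str:
    """Compute the SigV4 Authorization header for a request."""
    req = AWSRequest(method=method, url=url, headers=headers)
    # Caveat: add_auth stamps its own X-Amz-Date; a real validator must reuse
    # the client's timestamp and signed-header set when recomputing.
    SigV4Auth(creds, "s3", "us-east-1").add_auth(req)
    return req.headers["Authorization"]

# Validate: recompute the signature for the request exactly as the client
# signed it, and compare against the incoming Authorization header.
# Re-sign: compute a fresh Authorization for the modified request (e.g. the
# internal Host, or HEAD instead of GET) and replace the header before
# forwarding it to the storage service.
new_auth = authorization_for(
    "HEAD",
    "http://s3.intranet.example/my-bucket/my-object",
    {"Host": "s3.intranet.example"},
)
```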

Related

Is there any way to block requests from Postman or other apps calling a RESTful API

[Diagram: system infrastructure]
Expected:
I want to block requests that do not come from the FE server (domain.com).
For example, if users make requests from another app such as Postman, it should respond 403 with an access-denied message.
I used ALB rules; they work, but users can cheat with Postman.
I also used AWS WAF to inspect requests, but it doesn't work.
Is there any way to block requests from Postman or other apps?
We could generate a secret_key and check it between the FE server and the BE server, but users can see it in the headers, simulate the headers in Postman, and call the API successfully.
Current Solution:
I use Rule of Application Load Balancer to check Host and Origin. But users can add these params on Postman and request success.
[Screenshot: ALB rule]
When I add an Origin header matching the value set on the ALB, the request succeeds.
[Screenshots: Postman request succeeding / Postman request denied]
Users can cheat and call the API successfully.
Thanks for reading. Please help me find a solution for this one. Thanks a lot.
No. HTTP servers have no way to know what client is being used to make any HTTP request. Any HTTP client (browsers, Postman, curl, whatever) is capable of making exactly the same requests as any other.
The user-agent header is a superficial way to do this, but it's easy enough for Postman or any other HTTP client to spoof the user-agent header so that the request looks like it is coming from a web browser.
You can only make it more challenging to do so. Some examples to thwart this behavior include using tools like Google reCAPTCHA or Cloudflare's browser integrity check, but they're not bulletproof and ultimately aren't 100% effective at stopping people from using tools/automation to access your site in unintended ways. At the end of the day, you're limited to what can be done with HTTP, and Postman can do everything at the HTTP layer.
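As an illustration of "raising the bar", here is a minimal sketch of the shared-secret idea from the question, hardened with a timestamp so copied headers expire quickly; the secret, token format, and 60-second window are assumptions, and anyone who can extract the secret from the frontend can still forge tokens:

```python
# A minimal sketch, assuming a SECRET shared between FE and BE: sign each
# request with a short-lived HMAC so copied headers stop working quickly.
# This only raises the bar; it does not identify the client application.
import hashlib
import hmac
import time

SECRET = b"shared-secret"  # assumption: known only to the FE and BE servers

def make_token(path: str) -> str:
    """FE side: build a 'timestamp:signature' token for a request path."""
    ts = str(int(time.time()))
    sig = hmac.new(SECRET, f"{ts}:{path}".encode(), hashlib.sha256).hexdigest()
    return f"{ts}:{sig}"

def verify_token(token: str, path: str, max_age: int = 60) -> bool:
    """BE side: reject tokens that are expired or whose signature mismatches."""
    ts, sig = token.split(":", 1)
    if int(time.time()) - int(ts) > max_age:
        return False  # expired or replayed token
    expected = hmac.new(SECRET, f"{ts}:{path}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```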

AWS API Gateway Custom Domain not passing the user-agent

I have a custom domain example.com that is redirecting to my API gateway api-example.com, but it doesn't seem to pass the user-agent field; all my user-agent values are AmazonAPIGateway_5rfp2g9h9b.
If I call api-example.com directly, it works fine, but if I call example.com, it doesn't.
Any idea on how I could pass the correct user-agent HTTP Header?
Thanks
It's not clear what you mean by redirect, or what the domains you listed are. So you have two custom domains? If so, how did you do that: CloudFront with a custom origin? And what type of integration request do you have? Is this a REST or HTTP API? This is probably why you are getting downvoted: you don't have any detail and the domains don't make sense.
Either way, in your API make sure you have the user-agent field defined where it is applicable:
In the request part of your API, make sure your integration request is forwarding this header.
Likewise, if you are using CloudFront, make sure it forwards the 'user-agent' header and that the header is whitelisted.
Note that this header comes from your web browser, and sometimes the SDK being used sets it too. So if you don't set this header for whatever reason, that could be a problem. I don't know, for example, whether 'from this domain' means you are using a hosted website, while the other case means making a request from Postman, etc.
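A hedged boto3 sketch of the header-forwarding idea for a REST API (the IDs, URI, and custom header name are placeholders; since API Gateway sets its own outbound User-Agent, the value is mapped here to a custom header the backend can read):

```python
# A hedged sketch with placeholder IDs: forward the client's User-Agent to
# the backend under a custom name, since the gateway's outbound User-Agent
# is set by API Gateway itself.
import boto3

apigw = boto3.client("apigateway")

apigw.put_integration(
    restApiId="abc123",   # placeholder REST API id
    resourceId="res456",  # placeholder resource id
    httpMethod="GET",
    type="HTTP",
    integrationHttpMethod="GET",
    uri="https://api-example.com/endpoint",  # placeholder backend URI
    requestParameters={
        # method.request.header.User-Agent must also be declared on the
        # method request for this mapping to resolve.
        "integration.request.header.X-Original-User-Agent":
            "method.request.header.User-Agent"
    },
)
```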
Short answer: Validate the contents of your header
Ref: the AWS S3 documentation on redirects and HTTP user-agents, as quoted below.
Redirects and HTTP user-agents:
..Programs that use the Amazon S3 REST API should handle redirects either at the application layer or the HTTP layer. Many HTTP client libraries and user agents can be configured to correctly handle redirects automatically; however, many others have incorrect or incomplete redirect implementations.
Before you rely on a library to fulfill the redirect requirement, test the following cases:
Verify all HTTP request headers are correctly included in the redirected request (the second request after receiving a redirect) including HTTP standards such as Authorization and Date.
Verify non-GET redirects, such as PUT and DELETE, work correctly.
Verify large PUT requests follow redirects correctly.
Verify PUT requests follow redirects correctly if the 100-continue response takes a long time to arrive.
HTTP user-agents that strictly conform to RFC 2616 might require explicit confirmation before following a redirect when the HTTP request method is not GET or HEAD. It is generally safe to follow redirects generated by Amazon S3 automatically, as the system will issue redirects only to hosts within the amazonaws.com domain and the effect of the redirected request will be the same as that of the original request...
Optional/additional help: I was trying to understand your description; if you're going across domains, that's CORS.
Please consider CORS, which you seem to be missing; please see the configuration here.
Also, very important: enabling CORS support for a resource and its methods does not recursively enable it for child resources and their methods.
If you want to set up your custom user-agent header:
1. Set up CORS in the console: under Resources, enable CORS.
2. Set up your headers.
3. As a last step, redeploy to a stage for the settings to take effect!
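For that last step, a small boto3 sketch with placeholder values:

```python
# A small sketch (placeholder IDs): CORS/header changes on a REST API only
# take effect once the API is redeployed to a stage.
import boto3

apigw = boto3.client("apigateway")
apigw.create_deployment(restApiId="abc123", stageName="prod")
```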

AWS API Gateway expects the request URL to be encoded twice

My API takes requests that can potentially have spaces in the path parameters:
/data/{id}/hello/{Some message with a space}.
A sample request would be /data/23/hello/Say%20Hi
My Angular frontend code encodes the request URL that is sent to the AWS API Gateway, but I get the following error:
`The Canonical String for this request should have been
'GET
/data/23/hello/Say%2520Hi'`
My API Gateway has a Velocity template that decodes the parameters using $util.urlDecode().
I'm facing the same problem; I've been stuck on it for a day.
If you are using an HTTP API, it cannot be solved. However, if you use a REST API, I managed to make this work.
Specifically, you should use URL Path Parameters.
You should:
Add a resource containing the /{variable}
Add a URL Path Parameter in the Integration Request configuration, with name variable, mapped from method.request.path.variable
Notice that the solution may depend on the integration type that you are using.
In the screenshot below you can see how I'm redirecting all the received traffic to a NetworkLoadBalancer.
The resource has the variable /{proxy+}, the endpoint URL has the {proxy}, and, in the URL Path Parameters, I've configured the mapping method.request.path.proxy.
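A hedged boto3 equivalent of that console setup (placeholder IDs and load-balancer hostname):

```python
# A hedged sketch (placeholder IDs/hostname): a greedy /{proxy+} resource
# whose integration URL reuses the {proxy} path variable via a URL Path
# Parameter mapping, forwarding all traffic to a Network Load Balancer.
import boto3

apigw = boto3.client("apigateway")

apigw.put_integration(
    restApiId="abc123",    # placeholder REST API id
    resourceId="res456",   # the /{proxy+} resource (placeholder id)
    httpMethod="ANY",
    type="HTTP_PROXY",
    integrationHttpMethod="ANY",
    uri="http://my-nlb.example.com/{proxy}",  # placeholder NLB hostname
    requestParameters={
        # URL Path Parameter: feed the method's 'proxy' segment into the URI.
        "integration.request.path.proxy": "method.request.path.proxy"
    },
)
```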

API Gateway Returns Forbidden when string with "https://" is Posted

I have an API Gateway endpoint set up that uses a Lambda function to store a URL in DynamoDB. When I POST a message with this in the body
"videoURL": "www.youtube.com/watch?v=cgpvCVkrV6M"
the endpoint works fine. It returns 200 and the DynamoDB record is updated. However, when I POST this
"videoURL": "https://www.youtube.com/watch?v=cgpvCVkrV6M"
the endpoint returns a 403 Forbidden response and the DB record is not updated.
When I test inside API Gateway, the "https://" string is accepted.
I also have an API Key, a Usage Plan, a Client Certificate, and CORS Enabled (for local testing). I don't think any of these are the cause of my problem.
Does anyone have a guess as to why the "https://" string is causing a problem?
The problem was in my Web Application Firewall (WAF). When I created my firewall, I added the AWS-AWSManagedRulesCommonRuleSet collection. According to the documentation of this rule set, one of the rules is:
GenericRFI_BODY - Inspects the values of the request body and blocks requests attempting to exploit RFI (Remote File Inclusion) in web applications. Examples include patterns like ://.
Disabling this rule solved my problem. I can now successfully send in and store "https://" in my database.
However, this rule represents a best practice (or at least a good practice) and should not be disabled without considering the risk. By disabling this rule, I make my endpoint vulnerable to Remote File Inclusion attacks. Since I have access to the endpoint and Lambda function definition, I could split my URL input into two fields ("https" and "www.youtube...") and keep the rule enabled. For anyone else encountering this issue, you'll have to weigh the ease vs. risk of each approach.
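As a middle ground, WAFv2 also allows overriding a single rule in a managed group to Count instead of Block, so the rest of the rule set stays enforced. A hedged boto3 sketch, with placeholder web ACL name and id, showing only this one rule for brevity:

```python
# A hedged sketch (placeholder ACL name/id): set GenericRFI_BODY to Count
# so "https://" in request bodies is logged rather than blocked, while the
# rest of AWSManagedRulesCommonRuleSet keeps blocking.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Fetch the current web ACL to reuse its lock token and existing settings.
acl = wafv2.get_web_acl(Name="my-web-acl", Scope="REGIONAL", Id="my-web-acl-id")

common_rule_set = {
    "Name": "AWS-AWSManagedRulesCommonRuleSet",
    "Priority": 0,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesCommonRuleSet",
            # Override just this one rule to Count instead of Block.
            "RuleActionOverrides": [
                {"Name": "GenericRFI_BODY", "ActionToUse": {"Count": {}}}
            ],
        }
    },
    "OverrideAction": {"None": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "CommonRuleSet",
    },
}

wafv2.update_web_acl(
    Name="my-web-acl",
    Scope="REGIONAL",
    Id="my-web-acl-id",
    DefaultAction=acl["WebACL"]["DefaultAction"],
    Rules=[common_rule_set],  # plus any other rules the ACL already has
    VisibilityConfig=acl["WebACL"]["VisibilityConfig"],
    LockToken=acl["LockToken"],
)
```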

Can you statically set a header and its value when setting up an HTTP proxy using an AWS API Gateway endpoint?

I am creating an HTTP proxy using AWS API Gateway. I would like to hard-code some of the headers and their values to be forwarded as part of the request. I thought this might be possible in the 'Integration Request' portion of the proxy setup, but I can't seem to figure it out.
I'm trying to pass an Authorization header with an OAuth key. I don't want to share this key with clients that have access to this service, since I will only provide a subset of access to users of this specific endpoint.
In the Integration Request, you can configure a static header value to be sent to the integration endpoint by putting the value inside of single quotes, e.g. 'my_static_header_value'.
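A hedged boto3 sketch of that convention (placeholder IDs, URI, and token; note the single quotes inside the mapping value, which mark it as a literal rather than a reference):

```python
# A minimal sketch (placeholder IDs and token): single quotes mark the value
# as a static literal, so every forwarded request carries this Authorization
# header without the client ever seeing the key.
import boto3

apigw = boto3.client("apigateway")

apigw.put_integration(
    restApiId="abc123",   # placeholder REST API id
    resourceId="res456",  # placeholder resource id
    httpMethod="GET",
    type="HTTP",
    integrationHttpMethod="GET",
    uri="https://backend.example.com/data",  # placeholder backend URI
    requestParameters={
        # Static value: note the single quotes inside the mapping expression.
        "integration.request.header.Authorization": "'Bearer my-oauth-token'"
    },
)
```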
Is it a problem to put those hardcoded headers in the request body? If not, you could just use a template (in the integration request screen):
{
"hardcoded_header": "$input.params('hardcoded_header')"
}
Hope this helps.