Akamai CDN - Whitelist service by request header or user agent

We are using Akamai CDN as our load balancer, and it also serves as a gatekeeper for requests.
We usually consume third-party services, and in those cases we whitelist their IPs so they can reach our servers. The service we are currently using cannot share an IP, since it runs in the cloud and its addresses keep changing. They can provide either a hostname, a custom request header, or a user agent.
I tried adding a host entry, but that did not work. Any idea how to whitelist by custom request header or user agent?

Depending on the product you're using to deliver with Akamai, the solution is to add a "Modify Outgoing Request Header" behavior.
You can find it in the "Add Behavior" tool by filtering behaviors on "header".
By default, this behavior lets you send a User-Agent header to your origin with the request; you can also specify a custom header name and value.
If you make it part of your Default Rule, then every single request that goes through that property will have the custom header.
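If you manage the property as a rule tree through the Property Manager API (PAPI) rather than the UI, the behavior looks roughly like the sketch below. The option names are my assumption based on the UI labels and may differ in your rule format version; the header name and value are hypothetical placeholders.

    # Hedged sketch of the behavior as it might appear in a PAPI rule tree,
    # written as a Python dict. Option names are assumed from the UI labels;
    # check them against your property's rule format before using.
    modify_outgoing_request_header = {
        "name": "modifyOutgoingRequestHeader",
        "options": {
            "action": "ADD",
            "standardAddHeaderName": "OTHER",       # "OTHER" = use a custom header name
            "customHeaderName": "X-Partner-Token",  # hypothetical header name
            "headerValue": "shared-secret-value",   # hypothetical value
        },
    }

Dropping that into the Default Rule's behaviors list (however you deploy rule trees) has the same effect as adding it in the UI.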

Related

AWS API Gateway Custom Domain not passing the user-agent

I have a custom domain example.com that is redirecting to my API Gateway domain api-example.com, but it doesn't seem to pass the user-agent field; all my user-agent values are AmazonAPIGateway_5rfp2g9h9b.
If I call api-example.com directly it works fine, but if I call example.com it doesn't.
Any idea on how I could pass the correct user-agent HTTP Header?
Thanks
It’s not clear what you mean by redirect or how the domains you listed relate. Do you have two custom domains? If so, how did you do that: CloudFront with a custom origin? What type of integration request do you have? Is this a REST or HTTP API? That is probably why you are getting downvoted: there isn't enough detail and the domains don't make sense.
Either way, in your API make sure you have the user-agent field defined where it is applicable:
in the Method Request part of your API, and make sure your Integration Request is forwarding this header (a scripted sketch follows below).
Likewise, if you are using CloudFront, make sure it forwards the 'user-agent' header, i.e. that it is whitelisted in the cache behavior.
Note this header comes from your web browser, though the SDK being used sometimes sets it too. So if the client doesn't set this header for whatever reason, that could be the problem; it isn't clear whether calling from one domain means a hosted website while the other means making a request from Postman, etc.
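If it is a REST API and you prefer scripting over the console, the method/integration mapping for the header can be sketched with boto3 along these lines; the API, resource, and method identifiers are hypothetical placeholders, and the patch paths should be double-checked against your setup.

    # Hedged boto3 sketch: declare User-Agent on the method request and map it
    # through to the integration request of a REST API. IDs are placeholders.
    import boto3

    apigw = boto3.client("apigateway")
    rest_api_id = "a1b2c3d4e5"   # hypothetical REST API id
    resource_id = "abc123"       # hypothetical resource id
    http_method = "GET"

    # Declare the header on the method request ("false" = not required).
    apigw.update_method(
        restApiId=rest_api_id,
        resourceId=resource_id,
        httpMethod=http_method,
        patchOperations=[{
            "op": "add",
            "path": "/requestParameters/method.request.header.User-Agent",
            "value": "false",
        }],
    )

    # Forward it from the method request to the integration request.
    apigw.update_integration(
        restApiId=rest_api_id,
        resourceId=resource_id,
        httpMethod=http_method,
        patchOperations=[{
            "op": "add",
            "path": "/requestParameters/integration.request.header.User-Agent",
            "value": "method.request.header.User-Agent",
        }],
    )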
Short answer: validate the contents of your header.
Ref: the AWS documentation on redirects and HTTP user-agents, quoted below.
Redirects and HTTP user-agents:
..Programs that use the Amazon S3 REST API should handle redirects either at the application layer or the HTTP layer. Many HTTP client libraries and user agents can be configured to correctly handle redirects automatically; however, many others have incorrect or incomplete redirect implementations.
Before you rely on a library to fulfill the redirect requirement, test the following cases:
Verify all HTTP request headers are correctly included in the redirected request (the second request after receiving a redirect) including HTTP standards such as Authorization and Date.
Verify non-GET redirects, such as PUT and DELETE, work correctly.
Verify large PUT requests follow redirects correctly.
Verify PUT requests follow redirects correctly if the 100-continue response takes a long time to arrive.
HTTP user-agents that strictly conform to RFC 2616 might require explicit confirmation before following a redirect when the HTTP request method is not GET or HEAD. It is generally safe to follow redirects generated by Amazon S3 automatically, as the system will issue redirects only to hosts within the amazonaws.com domain and the effect of the redirected request will be the same as that of the original request...
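As a way to exercise the first of those test cases from a script, something like the harness below can help; the object URL is a hypothetical placeholder and the request is unsigned, so treat it purely as a redirect-behaviour check for your HTTP client, not as a working S3 upload.

    # Generic check that a client library carries headers across redirects.
    # Point it at an endpoint that actually issues a redirect (for example a
    # newly created bucket's temporary 307). Note that "requests" deliberately
    # drops Authorization when a redirect crosses to a different host, which is
    # exactly the kind of behaviour these test cases ask you to verify.
    import requests

    resp = requests.put(
        "https://examplebucket.s3.amazonaws.com/probe.txt",  # hypothetical URL
        data=b"hello",
        headers={"x-amz-meta-probe": "1"},
        allow_redirects=True,
    )

    # Every redirect that was followed shows up in resp.history.
    for hop in resp.history:
        print(hop.status_code, hop.headers.get("Location"))

    # resp.request is the final request after redirects: confirm the method and
    # the headers you depend on were preserved.
    print(resp.request.method, dict(resp.request.headers))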
Optional/additional help: from your description, if you're going across domains, that's CORS.
Please consider CORS, which you seem to be missing; see the configuration documentation.
Also, very important: enabling CORS support for a resource and its methods does not recursively enable it for child resources and their methods.
If you want to set up your custom header for user-agent:
Set up CORS in the console: under Resources, enable CORS for the resource.
Set up your headers, including user-agent in the allowed headers.
As a last step, you have to redeploy to a stage for the settings to take effect! A scripted equivalent is sketched below.
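For reference, a hedged boto3 equivalent of those console steps might look like this: an OPTIONS mock integration returning the CORS headers (with user-agent in Access-Control-Allow-Headers), followed by the redeploy. The IDs and stage name are hypothetical placeholders.

    # Hedged sketch: enable a CORS preflight on one resource and redeploy.
    import boto3

    apigw = boto3.client("apigateway")
    rest_api_id, resource_id, stage = "a1b2c3d4e5", "abc123", "prod"  # placeholders

    # OPTIONS method backed by a mock integration for the preflight request.
    apigw.put_method(restApiId=rest_api_id, resourceId=resource_id,
                     httpMethod="OPTIONS", authorizationType="NONE")
    apigw.put_integration(restApiId=rest_api_id, resourceId=resource_id,
                          httpMethod="OPTIONS", type="MOCK",
                          requestTemplates={"application/json": '{"statusCode": 200}'})

    # Declare which response headers the method may return...
    apigw.put_method_response(
        restApiId=rest_api_id, resourceId=resource_id,
        httpMethod="OPTIONS", statusCode="200",
        responseParameters={
            "method.response.header.Access-Control-Allow-Origin": True,
            "method.response.header.Access-Control-Allow-Methods": True,
            "method.response.header.Access-Control-Allow-Headers": True,
        })

    # ...and the static values to return (note User-Agent in Allow-Headers).
    apigw.put_integration_response(
        restApiId=rest_api_id, resourceId=resource_id,
        httpMethod="OPTIONS", statusCode="200",
        responseParameters={
            "method.response.header.Access-Control-Allow-Origin": "'*'",
            "method.response.header.Access-Control-Allow-Methods": "'GET,OPTIONS'",
            "method.response.header.Access-Control-Allow-Headers":
                "'Content-Type,Authorization,User-Agent'",
        })

    # Nothing takes effect until you redeploy to the stage.
    apigw.create_deployment(restApiId=rest_api_id, stageName=stage)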

Mapping Host Header to custom header for secondary origin

I'm looking for a way to pass the requesting host header onto either the API Gateway or a custom endpoint (outside of amazon) from a cloudfront origin.
Essentially I have multiple domains mapped to a cloudfront catchall and I'm trying to pre-render based off the index request on the server while letting all other resources through.
If this is not possible, would Lambda@Edge be able to achieve such a thing?
Thanks!
Until such time as Lambda@Edge leaves preview, here's your workaround:
For each domain name, create a separate CloudFront distribution, and add a unique custom origin header.
If you've configured more than one CloudFront distribution to use the same origin, you can specify different custom headers for the origins in each distribution and use the logs for your web server to distinguish between the requests that CloudFront forwards for each distribution.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/forward-custom-headers.html
It should go without saying that "use the logs for your web server" is only one possible use for this value. You can also use it to identify which domain the request is for, by inspecting the inserted request header.
For example, for the site api-42.example.com, add a custom origin header X-Forwarded-Host whose static value matches the hostname, api-42.example.com.
CloudFront adds the custom origin header to each request when sending it to the origin server.
If the client, for whatever reason, sends the same header, CloudFront discards what the client sent, before adding your header and value to each request.
Since the actual CloudFront distributions themselves are free, there's no real harm in this solution. If you need to create a lot of them, that's easily scripted with aws-cli. By default, accounts can create 200 different distributions, but you can submit a free support request to increase that limit.
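If you script it with an SDK instead of aws-cli, adding the static header to an existing distribution might look like the sketch below (distribution ID and hostname are hypothetical placeholders).

    # Hedged boto3 sketch: add a static X-Forwarded-Host custom origin header
    # to every origin of one distribution. Run once per distribution/domain.
    import boto3

    cf = boto3.client("cloudfront")
    dist_id = "E1EXAMPLE"          # hypothetical distribution id
    host = "api-42.example.com"    # the site this distribution serves

    # Fetch the current config together with its ETag (required for updates).
    resp = cf.get_distribution_config(Id=dist_id)
    config, etag = resp["DistributionConfig"], resp["ETag"]

    # Attach the custom header to each origin in the distribution.
    for origin in config["Origins"]["Items"]:
        origin["CustomHeaders"] = {
            "Quantity": 1,
            "Items": [{"HeaderName": "X-Forwarded-Host", "HeaderValue": host}],
        }

    cf.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=etag)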
You may now be contemplating the impact of this on your cache hit rate, since the different sites wouldn't share a common cache. That's a valid concern, but the impact may not be as substantial as you expect, for a variety of reasons -- not the least of which is that CloudFront's cache is not monolithic. If you have viewers hitting a single distribution but from two different parts of the world, those users are almost certainly connecting to different CloudFront edge locations, thus hitting different cache instances anyway.

How to add basic logic in AWS S3 or CloudFront?

Let's say I have two files: one for Safari and one for Firefox.
I want to check the User-Agent and return a file based on it.
How do I do this without adding an external server?
You can't do this without adding an extra server.
S3 supports static content. It does not¹ vary its response based on request headers.
CloudFront relies on the origin server if content needs to vary based on request headers. Note that by default, CloudFront doesn't forward most headers to the origin, but this can be changed in the cache behavior configuration.
If you forward the User-Agent header to the origin, your cache hit rate drops dramatically, since CloudFront has no choice but to assume any and every change in the user agent string could trigger a change in the response, so an object in the cache that was requested by a specific user agent string will only be served to a future browser with an identical user agent string. It will cache each different copy, but this still hurts your hit rate.
If you only want to know the general type of browser, CloudFront can inject special headers to tell the origin whether the user agent is desktop, smart-tv, mobile, or tablet, without actually forwarding the user agent string and causing the same negative impact on the cache hit ratio.
So CloudFront will correctly cache the appropriate version of a page for each unique user agent... but the origin server must implement the actual content selection logic. And when the origin is S3, that isn't supported -- unless you have a server between CloudFront and S3. This is a perfectly valid configuration -- I have such a setup, with a server that rewrites the request path received from CloudFront before sending the request to S3, then returns the content from S3 back to CloudFront, which returns the content to the browser.
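To make that concrete, here is a minimal, untested sketch of such a server sitting between CloudFront and S3, selecting an S3 key prefix from the device-type header mentioned above. The bucket name, key layout, and port are hypothetical, and the device header must be whitelisted in the cache behavior for it to arrive at the origin.

    # Sketch of a tiny origin between CloudFront and S3 that picks an S3 key
    # based on CloudFront's device-type header. Names are placeholders.
    import boto3
    from http.server import BaseHTTPRequestHandler, HTTPServer

    s3 = boto3.client("s3")
    BUCKET = "example-static-site"  # hypothetical bucket

    class SelectingOrigin(BaseHTTPRequestHandler):
        def do_GET(self):
            # CloudFront sets this to "true"/"false" when the header is forwarded.
            mobile = self.headers.get("CloudFront-Is-Mobile-Viewer") == "true"
            key = ("mobile" if mobile else "desktop") + self.path
            obj = s3.get_object(Bucket=BUCKET, Key=key.lstrip("/"))
            self.send_response(200)
            self.send_header("Content-Type", obj["ContentType"])
            self.end_headers()
            self.wfile.write(obj["Body"].read())

    if __name__ == "__main__":
        HTTPServer(("", 8080), SelectingOrigin).serve_forever()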
AWS Lambda would be a potential candidate for an application like this, acting as the necessary server (a serverless server, if you will) between CloudFront and S3... but it does not yet support binary data, so for anything other than text, that isn't an option, either.
¹At least, not in any sense that is relevant, here. Exceptions exist for CORS and when access is granted or denied based on a limited subset of request headers.

Third party code on subdomain

As the owner of the domain example.com, which hosts a lot of content, what security risks arise from providing a subdomain to a third-party company? We don't want to share any of the content, and the third company would have complete control over the application and the machine hosting the subdomain site.
I'm concerned mainly about:
Shared cookies
We have cookies on .example.com, so they will also be sent in requests to the subdomain. Is it possible for us to point the A record to a reverse proxy where we strip the cookies and send the request to the third-party provider without them?
Content loading from main domain
Is it possible to set document.domain to example.com and make an XMLHttpRequest to example.com?
Cross site scripting
I guess it would be no problem because of the same-origin policy. Is a subdomain treated as a separate domain?
Any other security issues?
We have cookies on .example.com, so they will also be sent in requests to the subdomain. Is it possible for us to point the A record to a reverse proxy where we strip the cookies and send the request to the third-party provider without them?
Great idea; yes, you could do this. However, you will also need to set the HttpOnly flag on your cookies, otherwise the third party would be able to retrieve them with JavaScript. A rough sketch of such a stripping proxy follows below.
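In practice you would probably do the stripping in nginx, HAProxy, or at the CDN layer, but here is a GET-only, untested Python illustration of the idea; the upstream hostname is a hypothetical placeholder.

    # Minimal sketch of the cookie-stripping reverse proxy idea: forward the
    # request upstream with the Cookie (and Host) headers removed so the
    # .example.com cookies never reach the third party.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import Request, urlopen

    UPSTREAM = "http://third-party.internal"  # hypothetical upstream host

    class StrippingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # Copy incoming headers, dropping Cookie (and Host) before forwarding.
            headers = {k: v for k, v in self.headers.items()
                       if k.lower() not in ("cookie", "host")}
            upstream = urlopen(Request(UPSTREAM + self.path, headers=headers))
            self.send_response(upstream.status)
            for name, value in upstream.getheaders():
                if name.lower() != "transfer-encoding":
                    self.send_header(name, value)
            self.end_headers()
            self.wfile.write(upstream.read())

    if __name__ == "__main__":
        HTTPServer(("", 8080), StrippingProxy).serve_forever()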
Is it possible to set document.domain to example.com and make an XMLHttpRequest to example.com?
No, for Ajax a subdomain is treated as a different origin. See this answer.
I guess it would be no problem because of the same-origin policy. Is a subdomain treated as a separate domain?
JavaScript code on the subdomains could interact with each other, but only with the cooperation of your site: you would also need to set document.domain = 'example.com'; on both sides. If you do not do this, you are secure against this threat.
See here:
When using document.domain to allow a subdomain to access its parent securely, you need to set document.domain to the same value in both the parent domain and the subdomain. This is necessary even if doing so is simply setting the parent domain back to its original value. Failure to do this may result in permission errors.
Any other security issues?
You need to be aware of cookie poisoning. If evil.example.com sets a non host-only cookie at .example.com that your domain believes it has set itself, then the evil cookie may be used for your site.
For example, if you display the contents of the cookie as HTML, then this may introduce XSS. Also, if you're using the double-submit cookie CSRF prevention method, an evil domain may be able to set its own cookie value to achieve CSRF. See this answer.

Consuming a webservice with jsessionid in URL

I'm working on an SAP project where I have to call a non-SAP service with a jsessionid in the binding URL. I have already generated a proxy class out of the WSDL and defined a logical port with my URL. In my case it should be dynamic, like {host}/service/foo/binding;jsessionid={xxx}, but it is static, like {host}/service/foo/binding.
How can I achieve that session handling?
EDIT: The problem here is that it's not only for authentication, it's also for load balancing. The jsessionid MUST be submitted via URL rewriting. Any ideas?
You should be able to configure this with the soamanager transaction:
Go to the service configuration screen and select your consumer proxy
Edit the existing, or create a new logical port
Go to the transport settings tab and change the URL access path
Once saved, you can find the logical port as a destination in transaction SM59. It's one of the generated ones in the external HTTP connections tree.
Providing a value for the parameter will probably require a modification of the SAP software though. The system uses the cl_http_client=>create_by_destination method to obtain a client object to perform the http call, so maybe you can implement some custom code there.
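For comparison, outside the SAP proxy machinery the URL-rewriting pattern the question asks for is simply carrying the server-issued session id in the access path; a generic (non-SAP) Python sketch with hypothetical URLs:

    # Generic illustration of jsessionid URL rewriting (not SAP-specific):
    # take the JSESSIONID issued by the first response and append it to the
    # access path of subsequent calls. Host and paths are placeholders.
    import requests

    host = "https://service.example.com"  # hypothetical service host

    # Initial call: the service issues JSESSIONID via Set-Cookie.
    first = requests.get(host + "/service/foo/binding")
    session_id = first.cookies.get("JSESSIONID")

    # Subsequent calls: rewrite the URL so the load balancer sees the session
    # id in the path instead of relying on the cookie.
    rewritten = f"{host}/service/foo/binding;jsessionid={session_id}"
    second = requests.post(rewritten, data="<soap:Envelope/>",
                           headers={"Content-Type": "text/xml"})
    print(second.status_code)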