I would like Akamai not to cache certain URLs if the origin server sends a specific header. Is this possible to do with Akamai?
The question has been covered pretty well here: Bypass specific URL from Akamai if certain cookie exist
I would be surprised if there were no built-in way to do this. In many cases it is too difficult to configure these rules in Akamai itself, because only the origin server knows when a page cannot be cached.
This is definitely possible using Akamai Property Manager configuration.
If you are using Property Manager, you can do this with the Property Manager API or in Luna.
The Property Manager API documentation is here:
PAPI
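For the "origin decides" requirement specifically, one common pattern is to set the property's Caching behavior to honor the origin's Cache-Control header, and have the origin send Cache-Control: no-store on any page that must not be cached. As a rough sketch (not a complete rule tree, and exact option names can vary by product), the relevant fragment of a PAPI rule would look something like this:

    {
      "name": "caching",
      "options": {
        "behavior": "CACHE_CONTROL",
        "defaultTtl": "1h",
        "mustRevalidate": false
      }
    }

With that in place, responses carrying Cache-Control: no-store from the origin bypass the cache, while other responses follow their own Cache-Control values (or the default TTL when the origin sends none).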
Here is an image of the general idea I want to accomplish
I have a React application that is hosted as a Zendesk app via an iframe on subdomain.zendesk.com; the iframe fetches its content from CloudFront / S3 (using an S3 origin) and displays it within the Zendesk UI.
I'm trying to secure it and want to restrict access to the content to a specific origin (subdomain.zendesk.com, for example) so that if anyone were to view the CloudFront distribution directly (by navigating to xxxx.cloudfront.net) the request would be rejected.
How can this be achieved? I have tried using AWS WAF and creating a rule that looks at the request's Origin header and matches it against the subdomain URL (example origin: subdomain.zendesk.com), but that doesn't work, so I think I'm barking up the wrong tree with that approach.
I have also tried creating a custom origin request policy on the distribution's behaviour, but again that didn't yield any results.
Zendesk does offer signed URL functionality, where the initial request becomes a POST to the server containing a JWT as form data in the request payload. I read that it might be possible to use Lambda@Edge to accomplish this; I tried to implement it but have not had any luck so far.
Any tips, examples or outlines as to what I am misunderstanding about these services would be very much appreciated.
To get better support from the community, share the specific use cases in your question and describe in detail what you tried and what errors you got.
There are various ways to achieve what you mentioned in the picture:
Create multiple CloudFront distributions, one per domain; they can share the same origin or use distinct origins as needed
Instead of separate domains, route traffic using paths or routes, e.g. same-domain.com/path1, same-domain.com/path2, etc.
Use Lambda@Edge and route the traffic based on domains (see the sketch after this list)
Note that you can't route between multiple domains using CloudFront's behaviours functionality alone, since cache behaviours match on path patterns, not domains
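If you go the Lambda@Edge route for the Zendesk case, a viewer-request function can inspect the Referer header and refuse anything that did not come from the expected Zendesk subdomain. Below is a minimal sketch, assuming the allowed origin is subdomain.zendesk.com; keep in mind the Referer header can be spoofed by non-browser clients, so this only discourages casual direct access and is no substitute for validating the Zendesk signed-request JWT:

    'use strict';

    // Minimal Lambda@Edge sketch (viewer-request trigger): reject requests
    // whose Referer does not start with the expected Zendesk origin.
    // The domain below is an assumption for illustration.
    const ALLOWED_PREFIX = 'https://subdomain.zendesk.com/';

    exports.handler = async (event) => {
        const request = event.Records[0].cf.request;
        const refererHeader = request.headers['referer'];
        const referer = refererHeader ? refererHeader[0].value : '';

        if (!referer.startsWith(ALLOWED_PREFIX)) {
            // Short-circuit with a 403 instead of forwarding to the origin.
            return {
                status: '403',
                statusDescription: 'Forbidden',
                body: 'Access denied.',
            };
        }

        return request; // let CloudFront continue to the S3 origin
    };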
Is it possible to block just part of a request using ModSecurity, Azure WAF, or similar? For example, could you block a cookie because it contains invalid characters while allowing the rest of the request through?
I'm trying to trace an issue where a cookie is sometimes lost.
ModSecurity or the web server could possibly be used to drop cookies. The easiest way to troubleshoot, though, will be to use an application proxy like Burp Suite and see what's going on with the cookie; often the browser is the one deciding whether or not to use the cookie.
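On the "drop cookies" option: ModSecurity on its own is generally limited to blocking or allowing the whole transaction, but the web server can strip an individual cookie before the application sees it. For instance, if the server is Apache, a hypothetical mod_headers directive along these lines could remove one cookie while letting the rest of the request through (the cookie name and regex are placeholders, untested):

    # Remove only the cookie named "badcookie" from the incoming Cookie
    # header, leaving the other cookies and the rest of the request intact.
    RequestHeader edit Cookie "(^|;\s*)badcookie=[^;]*" "$1"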
If I set the API key restrictions to "None", the service works great. If I set the HTTP referrers to websites, it works as expected with certain websites. If I set the HTTP referrers to the URLs of Web API servers, I get a "restricted" message. Does anyone know how to allow the URL of the Web API server to make a successful call when restrictions are being used? I would think that api.somedomain.com would work.
Looks like it might not be possible. Wow, what a shame! Hopefully, there is an update or workaround for this.
How to set Google API key restriction - HTTP referrers
By the way, these examples from their documentation don't work either:

    somedomain.com/*
    *.somedomain.com/*

I have to write the full subdomain for all of my website URLs.
Thanks in advance!
What I ended up doing is creating another API key for my Web API server requests. Since this key isn't displayed in a website, I shouldn't have to lock it down.
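A likely explanation, for anyone hitting the same wall: HTTP referrer restrictions are matched against the Referer header, which browsers attach automatically but server-to-server HTTP clients normally do not send at all, so a referrer-restricted key can never match when called from a Web API server. A quick Node.js (18+, ESM) sketch to illustrate; the endpoint and key are placeholders:

    // A server-side call like this goes out with no Referer header by
    // default, so an HTTP-referrer-restricted key comes back "restricted".
    const url = 'https://maps.googleapis.com/maps/api/geocode/json'
              + '?address=1600+Amphitheatre+Parkway&key=YOUR_API_KEY';

    const response = await fetch(url); // no Referer header attached
    console.log(response.status, await response.json());

That is also why a separate server-only key (ideally locked down by IP address restriction instead) is the usual workaround.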
We are using Akamai CDN as our load balancer, and it also serves as a gatekeeper for requests.
We usually consume 3rd-party services, and in those cases we whitelist their IPs for access to our servers. The service we are currently using cannot share an IP, since it runs in the cloud and its IPs keep changing. They can provide either a host name, a custom request header, or a user agent.
I tried adding a host entry, but that did not work. Any idea how to add a custom request header or user agent?
Depending on the product you're using to deliver with Akamai, the solution is to add a "Modify Outgoing Request Header" behavior.
You can find it in the "Add Behavior" tool by filtering the list of behaviors on "header".
By default, this allows you to specify a User-Agent header to go to your origin with the request. You can also specify a custom header name and value using that Behavior.
If you make it part of your Default Rule, then every single request that goes through that property will have the custom header.
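If you apply it through PAPI rather than the UI, the behavior shows up in the rule tree as a JSON fragment roughly like the following; the header name and value here are placeholder assumptions, not required names:

    {
      "name": "modifyOutgoingRequestHeader",
      "options": {
        "action": "ADD",
        "standardAddHeaderName": "OTHER",
        "customHeaderName": "X-Partner-Token",
        "headerValue": "some-shared-secret"
      }
    }

Your origin can then check for that header (or for the User-Agent variant) to recognize traffic that came through your Akamai property.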
I want to expose a RESTful web service for posting and retrieving data, which may be consumed by mobile devices or a web site.
Now the actual creation of the service isn't a problem; what does seem to be a problem is communicating with it from a different domain.
I have made a simple example service deployed on the ASP.NET development server, which just exposes a simple POST action accepting a request with JSON content. Then I created a simple web page using jQuery ajax to send some dummy data over, yet I believe I am getting stung with the same origin policy.
Is this a common thing, and how do you get around it? Some places have mentioned having a proxy on your own domain that you always send requests to, but then you cannot use the service in a RESTful manner...
So is this a common issue with a simple fix? There seem to be plenty of RESTful services out there that allow 3rd parties to use them...
How exactly are you "getting stung with the same origin policy"? From your description, I don't see how it could be relevant. If yourdomain.com/some-path/defined-request.json returns a certain JSON response, then it will return that response regardless of what is requesting the file, unless you have specifically defined required credentials that are not satisfied.
Here is an example of such a web service. It will return the same JSON object regardless of from where the request is made: http://maps.googleapis.com/maps/api/geocode/json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA&sensor=true
Unless I am misunderstanding you (in which case you should clarify your actual problem), the same origin policy doesn't really seem to apply here.
Update Re: Comment
"I make a simple HTML page and load it as file://myhtmlfilelocation/myhtmlfile.html and try to make an ajax request"
The cause of your problem is that you are using the file:// URL scheme, instead of the http:// protocol scheme. You can find information about this scheme in Section 3.10 of RFC 1738. Here is an excerpt:
The file URL scheme is used to designate files accessible on a particular host computer. This scheme, unlike most other URL schemes, does not designate a resource that is universally accessible over the Internet.
You should be able to resolve your issue by using the http:// scheme instead of the file:// scheme when you make your asynchronous HTTP request.
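In other words, serve the HTML page from the same development server and request a same-origin URL. A minimal sketch of the jQuery call, with a placeholder endpoint:

    // The page itself must be loaded over http:// from the dev server,
    // e.g. http://localhost:12345/test.html, not opened via file://.
    $.ajax({
        url: '/api/values',                 // same-origin, relative URL
        type: 'POST',
        contentType: 'application/json',
        data: JSON.stringify({ name: 'dummy', value: 42 }),
        success: function (response) {
            console.log('Server replied:', response);
        },
        error: function (xhr, status, err) {
            console.error('Request failed:', status, err);
        }
    });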