How do I tell Suave not to cache a URL?

I have two paths getting cached:
Auth.loggedOn (
    GET >=> choose [
        pathScan "/era/%i" (Some >> EraViewing.eraView homeFolder cn)
        path "/" >=> indexView homeFolder cn
    ])
There is an x.html file behind each of these, which gets served after some templated parts are replaced.
These responses are getting cached, but I do not want them to be. How can I tell Suave not to cache them?
Not the same question as the linked one, as that one is solved by making something a function, rather than by telling Suave not to cache, or when to recalculate/re-evaluate, a URL.

To tell the browser not to cache the response, you can define the following combinator:
let noCache =
    setHeader "Cache-Control" "no-cache, no-store, must-revalidate"
    >=> setHeader "Pragma" "no-cache"
    >=> setHeader "Expires" "0"
And use it like this:
let app = OK "Hello" >=> noCache
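For reference, outside of Suave entirely, the same three cache-busting headers can be set in a minimal Node/TypeScript handler. This is only a sketch for comparison, not part of the Suave API:
// A sketch, not Suave: the same cache-busting headers on a plain Node server.
import { createServer } from "node:http";

createServer((_req, res) => {
  res.setHeader("Cache-Control", "no-cache, no-store, must-revalidate");
  res.setHeader("Pragma", "no-cache");
  res.setHeader("Expires", "0");
  res.end("Hello");
}).listen(8080);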

Related

How to handle corrupted AWS ALB path due to browser caching?

I have an ALB with listener rules that check the path and headers. After clearing the cookie with the accessToken after going to /path1, it becomes impossible to go back to that path, and the browser automatically redirects to /home every time. For some reason, going to /path2, which is in the same set of rules as /path1, works, but if you clear the cookie once again, it becomes corrupted as well and loops you back to /home. Both /path1 and /path2 then become inaccessible to the user. I have the following listener rules:
Listener:
Rule 1:
when path = /login
when headers contain: cookie = *accessToken*
Redirect to path /home
Rule 2:
when path = /login
Redirect to path /home
Rule 3:
when path = /home
when headers contain: cookie = *accessToken*
Forward to lambda target group that checks access token and returns content type html if valid, else erase cookie and return to /login.
Rule 4:
when path = /home
Redirect to path /login
Rule 5:
when path = /path1 or /path2 or /path3
when headers contain: cookie = *accessToken*
Forward to lambda target group that checks access token and returns content type html if valid, else erase cookie and return to /login.
Rule 6:
when path = /path1 or /path2 or /path3
Forward to path /login
As far as I am aware, ALB doesn't have caching capabilities, so I thought the problem was browser caching of the HTML files or lambda responses. I therefore added the following to the lambda's response:
Lambda Target Group Response:
return {
  statusCode: 200,
  headers: {
    'Content-Type': 'text/html',
    'Cache-Control': 'no-cache, no-store, must-revalidate'
  },
  body: html
}
HTML files:
<head>
  <meta charset="utf-8" />
  <title>Website teste</title>
  <meta http-equiv="cache-control" content="no-cache, no-store, must-revalidate">
</head>
The problem still occurs, and I still can't go back to the path. I tried adding the headers to the HTML files as well, and it still didn't work. What other reasons could there be for the path getting corrupted and being impossible to return to without hitting the redirect loop?
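For reference, here is a minimal TypeScript sketch of the "erase cookie and return to /login" branch described in rules 3 and 5, assuming the standard ALB Lambda target response shape; the cookie name and paths come from the question, everything else is illustrative. Returning a 302 rather than a 301 matters here, since browsers cache permanent redirects:
// Hypothetical sketch of the cookie-clearing redirect branch (rules 3 and 5).
// A 302 is used so the browser does not cache the redirect itself, and
// no-store headers keep the response out of the browser cache.
export const handler = async () => ({
  statusCode: 302,
  headers: {
    Location: "/login",
    // Expire the accessToken cookie so the browser drops it.
    "Set-Cookie": "accessToken=; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT",
    "Cache-Control": "no-cache, no-store, must-revalidate",
  },
  body: "",
});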

C++ WebKitGTK: How to disable cross-origin policy?

I'm trying to load http://google.com in an iframe from a page on the "file://" domain. Of course, I got a "Google.com did not allow" error.
I already tried a reverse proxy, but I don't think a reverse proxy makes sense here.
I then researched for a few hours how to disable or bypass the cross-origin policy in WebKitGTK.
I tried some of the solutions from this manual page: https://webkitgtk.org/reference/webkit2gtk/stable/WebKitSettings.html
So I tried to add this block using WebKitSettings:
WebKitSettings *settings =
    webkit_web_view_get_settings(WEBKIT_WEB_VIEW(webview));
webkit_settings_set_allow_file_access_from_file_urls(settings, true);
but it does not work. I still can't load google.com (or any CORS-protected website) in an iframe.
According to my latest research, the WebKitGTK manual mentions a little trick for this.
It is mentioned as the property
(allow-file-access-from-file-urls)
but I can't figure out how to use it in my code.
Edit:
I added this line to my code:
webkit_settings_set_allow_universal_access_from_file_urls(settings, true);
Now I instead get a "Connection refused in a frame because it set X-Frame-Options to SAMEORIGIN" error.
How can I set this in WebKitGTK to allow cross-origin framing?
As already pointed out in the comments, CORS policy can't be bypassed.
You won't be able to load in an iframe any site that is properly configured to prevent that.
The only way to get around this would be to make a server-side request from a website you own to the website you'd like to display, configure your site with the appropriate X-Frame-Options, and make it return what it fetched from the site that should be displayed.
A sort of proxy, which is still hugely error-prone.
I made a quick proof of concept in PHP.
At https://lucasreta.com/test/google.com we have the following script, which retrieves the contents of google.com, parses and displays them:
<?php
$url = "https://www.google.com";

$request = curl_init();
curl_setopt_array($request, [
    CURLOPT_URL => $url,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_TIMEOUT => 30,
    CURLOPT_HTTPHEADER => [
        "Content-Type: text/html; charset=UTF-8"
    ]
]);
$return = curl_exec($request);
curl_close($request);

header('Content-Type: text/html; charset=UTF-8');
# after setting headers, we echo the response
# obtained from the site to display, and tweak it
# a bit in order to fix root-relative urls
echo str_replace("=\"/", "=\"$url/", $return);
And at https://lucasreta.com/test/google.com/iframe we include what is returned above in our iframe:
<style>
* {font-family: sans-serif}
iframe {width: 90%; height: 85vh;}
</style>
<h1>Google in an iframe</h1>
<iframe src="../"></iframe>

Why does my Access-Control-Allow-Origin header have an incorrect value?

I have a webpage on the domain https://imeet.pgi.com making an XMLHttpRequest to another domain. The request fails with the following console error (using Chrome browser):
XMLHttpRequest cannot load
https://pgidrive.com/eloqua/forminator/getForm.php. The
'Access-Control-Allow-Origin' header has a value
'https://imeet.pgi.coms' that is not equal to the supplied origin.
Origin 'https://imeet.pgi.com' is therefore not allowed access.
Note that the Access-Control-Allow-Origin header has a value of
https://imeet.pgi.coms, with an "s" on the end.
Why does my Access-Control-Allow-Origin header have this incorrect value?
If it could be a typo somewhere, where would I look to check?
More background info: I have made this same request successfully from other domains with no issue. I have set a list of allowed origin domains that includes imeet.pgi.com on the .htaccess file on pgidrive.com.
Also, the code for the allowed origin domains in my .htaccess:
<IfModule mod_headers.c>
SetEnvIf Origin "http(s)?://(www\.)?(agenday.com|imeet.pgi.com|pgi.com|go.pgi.com|staging.pgi.com|imeetlive.pgi.com|globalmeet.pgi.com|latam.pgi.com|br.pgi.com|pgi.ca)$" AccessControlAllowOrigin=$0$1
Header add Access-Control-Allow-Origin %{AccessControlAllowOrigin}e env=AccessControlAllowOrigin
Header set Access-Control-Allow-Credentials true
</IfModule>
In your htaccess file, when doing the following:
SetEnvIf Origin "http(s)?://(www\.)?(agenday.com|imeet.pgi.com|pgi.com|go.pgi.com|staging.pgi.com|imeetlive.pgi.com|globalmeet.pgi.com|latam.pgi.com|br.pgi.com|pgi.ca)$" AccessControlAllowOrigin=$0$1
you have AccessControlAllowOrigin=$0$1. Here, $0 means the whole matched string, and $1 means the first matched group. The first matched group here is (s)?.
When you make a request using the origin: https://imeet.pgi.com, the pattern is parsed and grouped as follows:
$0 = `https://imeet.pgi.com`
$1 = `s`
$2 = (empty; the optional `www\.` group did not match)
$3 = `imeet.pgi.com`
which is why you see the s character.
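To make that concrete, here is a small TypeScript sketch (illustrative only, with a trimmed domain list and escaped dots) showing how the groups fall out:
// Trimmed version of the original pattern, showing why $0$1 grows an "s".
const pattern = /http(s)?:\/\/(www\.)?(agenday\.com|imeet\.pgi\.com|pgi\.com)$/;
const m = "https://imeet.pgi.com".match(pattern)!;
console.log(m[0]);        // "https://imeet.pgi.com"  -> $0, the whole match
console.log(m[1]);        // "s"                      -> $1, the (s)? group
console.log(m[3]);        // "imeet.pgi.com"          -> $3, the domain group
console.log(m[0] + m[1]); // "https://imeet.pgi.coms" -> the bad header value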
Change that to (basically, remove the $1):
SetEnvIf Origin "https?://(?:(?:agenday|(?:(?:imeet|go|staging|imeetlive|globalmeet|latam|br)\.)?pgi)\.com|pgi\.ca)$" AccessControlAllowOrigin=$0

Set domain cookie in HTTPoison (Elixir)

OK, so my new problem in Elixir is that I can't set an explicit domain when creating cookies.
In this case:
HTTPoison.get("httpbin.org/cookies", [{"User-agent", user_agent}], hackney: [
  cookie: "cookie1=1; cookie2=2"
])
When I create a cookie, it will store a domain like .httpbin.org, but for a dumb reason I need to set the domain value to httpbin.org (without the leading dot).
I also tried:
HTTPoison.get("httpbin.org/cookies", [{"User-agent", user_agent}], hackney: [
  cookie: "cookie1=1; domain=httpbin.org; cookie2=2"
])
But of course the syntax expects domain as a cookie name and httpbin.org as a cookie value.
Thank you!
What's the reason you want to remove the dot at the beginning? It's optional, and it should match the entire domain with or without the dot.
How do browser cookie domains work?
Also, I think the domain attribute is for the Set-Cookie header returned by the HTTP server, rather than for requests made by the client. httpbin (https://httpbin.org/cookies/set) returns the Set-Cookie header, but it doesn't specify a domain attribute (just Path=/). It would be taken as .httpbin.org by clients such as browsers.
iex(25)> response = HTTPoison.get!("https://httpbin.org/cookies/set?k2=v2&k1=v1")
%HTTPoison.Response{body: "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 3.2 Final//EN\">\n<title>Redirecting...</title>\n<h1>Redirecting...</h1>\n<p>You should be redirected automatically to target URL: /cookies. If not click the link.",
headers: [{"Server", "nginx"}, {"Date", "Fri, 18 Dec 2015 23:49:46 GMT"},
{"Content-Type", "text/html; charset=utf-8"}, {"Content-Length", "223"},
{"Connection", "keep-alive"}, {"Location", "/cookies"},
{"Set-Cookie", "k2=v2; Path=/"}, {"Set-Cookie", "k1=v1; Path=/"},
{"Access-Control-Allow-Origin", "*"},
{"Access-Control-Allow-Credentials", "true"}], status_code: 302}
iex(26)> :hackney.cookies(response.headers)
[{"k1", [{"k1", "v1"}, {"Path", "/"}]}, {"k2", [{"k2", "v2"}, {"Path", "/"}]}]
Sorry if I'm missing the point.
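To illustrate the request side of that point with a sketch (TypeScript with Node's fetch rather than HTTPoison; the URL and values are just examples): the Cookie header a client sends carries only name=value pairs, so there is no slot for a domain attribute on the way out.
// Sketch: the outgoing Cookie header carries only name=value pairs.
// Domain is an attribute the server sets via Set-Cookie, not something
// a client can attach when sending cookies back.
async function main(): Promise<void> {
  const res = await fetch("https://httpbin.org/cookies", {
    headers: {
      "User-Agent": "my-agent", // placeholder user agent
      // Multiple cookies are separated by "; "; there is no domain slot here.
      Cookie: "cookie1=1; cookie2=2",
    },
  });
  console.log(await res.text());
}

main();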

How can I set S3 putObject options when using signed URLs to upload files from the client

I am using signed URLs to upload files directly from the client straight into my S3 bucket. To do this, I perform a direct PUT request to upload the file itself, after having created a signed URL for the command I want to perform.
I create the signed URL like so:
$command = $s3->getCommand('PutObject', array(
    'Bucket' => $this->_bucket,
    'Key' => $key,
    'ACL' => 'public-read',
    'CacheControl' => 'max-age=0',
    'ContentEncoding' => 'gzip',
    'ContentType' => $filetype,
    'Body' => '',
    'ContentMD5' => false
));
$signedUrl = $command->createPresignedUrl('+6 hours');
However, after then performing the PUT request and uploading the file itself, the Cache-Control and Content-Encoding headers are not set.
Does anyone have any idea where I am going wrong?
The headers still have to be set in the PUT request itself; including them in the signed URL isn't sufficient.
The pre-signed URL only serves to ensure that the actual request parameters match the authorized request parameters (otherwise the request fails).
So, if what I am saying is correct, then if these parameters aren't being sent with the request, it should fail, right? Almost.
Unfortunately, V2 authentication does not validate all request headers, such as Content-Encoding for example:
Note how only the Content-Type and Content-MD5 HTTP entity headers appear in the StringToSign. The other Content-* entity headers do not.
— http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html
The same is true for Cache-Control. Only x-amz-* headers are subject to validation against the signature provided in V2 (which uses &Signature= in the Query string).
V4 auth (which, by contrast, uses &X-Amz-Signature= in the query string) contains a mechanism allowing you to specify which headers need validation against the signature, but in either case, you have to send the headers with the actual request itself, not just include them in the signature. It appears that you aren't, and that's why they are not set.
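In practice, that means sending the same headers on the actual PUT. A minimal TypeScript sketch, assuming signedUrl and gzippedBody already exist, and mirroring the header values from the question's getCommand call:
// Sketch: send the headers on the actual PUT, matching what was signed.
async function uploadWithHeaders(signedUrl: string, gzippedBody: Uint8Array): Promise<void> {
  const res = await fetch(signedUrl, {
    method: "PUT",
    headers: {
      "Content-Type": "text/plain", // must match the signed ContentType
      "Cache-Control": "max-age=0",
      "Content-Encoding": "gzip",
    },
    body: gzippedBody,
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
}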