How to test viewing restricted content in AWS using signed cookies? - amazon-web-services

I have created a CloudFront distribution and configured it with restricted viewer access. So I'm just looking for a way to view the contents within the distribution by assigning the cookies. I've managed to generate the signed cookies below.
{
"CloudFront-Policy":
"CloudFront-Key-Pair-Id":
"CloudFront-Signature":
}
Can we just call the CloudFront URL (https://d1fzlamzw9yswb.cloudfront.net/17-Sep-201%3A05%3A48_1.jpg) in a browser and test whether it works by setting the cookies from the browser? What is the best way to test whether the signed cookies are working or not?

I assume that you already created a signed cookie.
You can use curl to send cookies to your CloudFront distribution. This is one way of testing if your setup works correctly. Here is how to pass a cookie with your request:
curl -b 'session=session_id' https://yourdistribution.cloudfront.net
The full header that curl sets for this request looks like this: Cookie: session=session_id
UPDATE:
Concretely, CloudFront will expect the following cookies to be set:
curl -v -X GET --output test-file -b "CloudFront-Expires=4762454400" -b "CloudFront-Signature=k9CxLm0HL5v6bx..." -b "CloudFront-Key-Pair-Id=APKA..." "https://yourdistribution.cloudfront.net/test-file"
Alternatively, we can also use the --cookie flag (the long form of -b), like this:
curl -v -X GET --output test-file --cookie "CloudFront-Expires=4762454400; CloudFront-Signature=k9CxLm0HL5v6bx...; CloudFront-Key-Pair-Id=APKA..." "https://yourdistribution.cloudfront.net/test-file"
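Note that the cookies shown in the question use a custom policy, so CloudFront would expect a CloudFront-Policy cookie instead of CloudFront-Expires, together with the signature and key pair ID. Assuming that, the same test would look roughly like this (the values are placeholders, not real credentials):
curl -v -X GET --output test-file -b "CloudFront-Policy=eyJTdGF0ZW1lbnQi..." -b "CloudFront-Signature=k9CxLm0HL5v6bx..." -b "CloudFront-Key-Pair-Id=APKA..." "https://yourdistribution.cloudfront.net/test-file"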

Related

Why is Basic Authentication failing with Postman CLI?

I am trying to automate my collections via the Postman CLI.
I am able to run a folder (with the Postman Runner) without problems, using Basic Authentication to access many endpoints I am calling.
If I try to run the very same folder with the Postman CLI, all the protected endpoints answer with 403 Forbidden.
It seems that the requests are not using the authentication header.
Is it a known problem? Is there a workaround?
Plus, to troubleshoot better, is there a way to inspect the requests when the collection is run with the Postman CLI? I can see a recap, but I cannot see the detailed requests with all the headers, body, etc.
I am running the collection/folder with
postman collection run COLLECTION_UUID -k --verbose -e ENVIRONMENT_UUID -i FOLDER_UUID --env-var "source=X.X.X.X" -d "datafile.json"
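One quick sanity check outside Postman is to call one of the protected endpoints directly with curl and Basic Authentication, to confirm the credentials themselves are accepted; the URL and credentials below are placeholders:
# -u sends an Authorization: Basic header built from user:password
curl -v -u "myuser:mypassword" "https://api.example.com/protected/endpoint"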

call AWS Elasticsearch Service API with cURL --aws-sigv4

When I execute
curl --request GET "https://${ES_DOMAIN_ENDPOINT}/my_index_pattern-*/my_type/_mapping" \
--user $AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY \
--aws-sigv4 "aws:amz:ap-southeast-2:es"
where $ES_DOMAIN_ENDPOINT is my AWS Elasticsearch endpoint, I'm getting the following response:
{"message":"The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."}
I'm confident that my $AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY are correct.
However, when I send the same postman request with the AWS Authentication and the parameters above, the response is coming through. I compared the verbose output of both requests and they have very minor differences, such as timestamps and signature.
I'm wondering, what is wrong with the --aws-sigv4 config?
This issue happens due to the * character in the path. There is a bug report in the curl repository to fix this issue: https://github.com/curl/curl/issues/7559.
Meanwhile, to mitigate the error you should either remove the * from the path or build curl from the branch https://github.com/outscale-mgo/curl-appimage/tree/http_aws_sigv4_encoding.
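For example, until a fixed curl is available, the same request signs correctly as long as the path contains no *, e.g. when querying one concrete index (the index name below is a placeholder; substitute one that actually exists in your domain):
curl --request GET "https://${ES_DOMAIN_ENDPOINT}/my_index_pattern-2021.09.17/my_type/_mapping" \
--user $AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY \
--aws-sigv4 "aws:amz:ap-southeast-2:es"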

Pre-signed url for the whole bucket

I am using gsutil to create pre-signed URLs for upload. When naming the object in the bucket, I can upload successfully. The following snippet from gsutil works fine with a curl PUT:
gsutil signurl -m PUT -d 10m -r eu ~/.ssh/mycreds.json gs://cjreyn-bucket-0/myobjectname.txt
However, when specifying just the bucket name, instead of an object within it, uploading an arbitrary object doesn't work:
gsutil signurl -m PUT -d 10m -r eu ~/.ssh/mycreds.json gs://cjreyn-bucket-0/
This returns the following from curl:
<?xml version='1.0' encoding='UTF-8'?><Error><Code>BucketAlreadyOwnedByYou</Code><Message>Your previous request to create the named bucket succeeded and you already own it.</Message></Error>
My curl line is as follows (signed URL replaced with <mysignedurl> for brevity):
curl -X PUT --upload-file myobj.txt "<mysignedurl>"
Is it even possible to create signed URLs for upload and download to/from the whole bucket, rather than for each object within it?
No, this isn't possible.
A signed URL authorizes exactly one specific request, which is transformed to its canonical form before signing. The service then assembles the same representation and calculates what the signature should have been, and at that point there's only one correct answer for what the valid signature can be for a given verb (e.g. PUT), resource (bucket + key), set of credentials, timestamp, and expiration.
The signature doesn't actually include a copy of the request parameters being authorized -- they're inferred from the request when it is made -- so there is no mechanism for passing any information to the service that could be used to specify e.g. that a wildcard path should be allowed.
Signing a PUT URL for the bucket itself authorizes the bearer of that URL to send a request to create a bucket with that name, as implied by the error you received.
One solution is to create a web service which responds to an authorized request (whatever that might mean in your environment, e.g. based on a user's web cookie) with a pre-signed URL for the parameters supplied in the request, once the web service validates them. Keep in mind that if this is a web site or an app that needs to be writing to the bucket, you can never trust those things to only make reasonable/safe requests, because they are outside your control -- so parameter validation would be critical to avoid malicious actions.
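A minimal sketch of that idea, reusing the gsutil command from the question: the (hypothetical) service validates the requested object name and only then signs a URL for that single object, never for the bucket itself:
# sign_put_url.sh -- invoked by the web service after it has validated "$1"
OBJECT_NAME="$1"
# sign a short-lived PUT URL for exactly one object inside the bucket
gsutil signurl -m PUT -d 10m -r eu ~/.ssh/mycreds.json "gs://cjreyn-bucket-0/${OBJECT_NAME}"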
Another option might be awscurl, which is a curl-like tool that does its own request signing. GCS has an AWS-compatibility mode, so you might be able to use this tool as-is, or adapt it, or there may be a comparable tool for GCS. But I assume you need the signed URL elsewhere, not on the local machine, otherwise you would just be using gsutil to do the upload.
It is impossible to do because a pre-signed URL is valid for one object only.

Passing cookies between requests in Postman runner

I'm writing a Postman collection to be executed in Postman Runner, which requires that cookies from a first request be used in subsequent requests.
In curl, you can achieve this like so
curl -d "username=x&password=y" -c logincookie.txt https://service.com/login
curl -b logincookie.txt https://service.com/x/profile
I can't seem to do this in Postman.
As documented, in my test for the first request I save the cookie as an environment variable
var login_cookie = postman.getResponseCookie("LOGIN");
postman.setEnvironmentVariable("login_cookie", login_cookie);
and then, as described in this blog post, I add the following header to the subsequent request,
Cookie: {{login_cookie}}
but the server responds to this request as if the cookie was not provided.
How can I pass the cookie from the first response to the second request?
I'm using Postman for Mac 4.10.7 and have enabled the interceptor with its default settings, although I don't know how to validate that this actually works!
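One way to see exactly which Cookie header the server accepts is to repeat the working curl flow with -v and compare the Cookie header it sends with the one Postman sends:
curl -v -d "username=x&password=y" -c logincookie.txt https://service.com/login
curl -v -b logincookie.txt https://service.com/x/profile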

Send cookies with curl

I am using curl to retrieve cookies like so:
curl -c cookies.txt url
then I parse the cookie I want from the cookies.txt file and send the request again with the cookie
curl -b "name=value" url
Is this the correct way to send the cookie?
Is there a simpler way?
You can use -b to specify a cookie file to read the cookies from as well.
In many situations using -c and -b to the same file is what you want:
curl -b cookies.txt -c cookies.txt http://example.com
Further
Using only -c will make curl start with no cookies, but it will still parse and understand incoming cookies; if redirects or multiple URLs are used, it will then use the received cookies within that single invocation before writing them all to the output file at the end.
The -b option feeds a set of initial cookies into curl so that it knows about them at start, and it activates curl's cookie parser so that it'll parse and use incoming cookies as well.
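For example, a typical login-then-reuse flow (the URLs are placeholders) writes the cookie jar on the first call and reads it back, while keeping it updated, on the second:
curl -c cookies.txt -d "user=x&pass=y" https://example.com/login
curl -b cookies.txt -c cookies.txt https://example.com/profile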
See Also
The cookies chapter in the Everything curl book.
The cookie file format apparently consists of one line per cookie, and each line consists of the following seven tab-delimited fields:
domain - The domain that created AND that can read the variable.
flag - A TRUE/FALSE value indicating if all machines within a given domain can access the variable. This value is set automatically by the browser, depending on the value you set for domain.
path - The path within the domain that the variable is valid for.
secure - A TRUE/FALSE value indicating if a secure connection with the domain is needed to access the variable.
expiration - The UNIX time that the variable will expire on. UNIX time is defined as the number of seconds since Jan 1, 1970 00:00:00 GMT.
name - The name of the variable.
value - The value of the variable.
For example:
.example.com TRUE / FALSE 1560211200 MY_VARIABLE MY_VALUE
From http://www.cookiecentral.com/faq/#3.5
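As a quick illustration of that format, a single-cookie file can be written by hand with tab separators and fed back to curl with -b (the domain and values are made up, and the expiration is set far in the future so the cookie is still considered valid):
printf '# Netscape HTTP Cookie File\n' > cookies.txt
printf '.example.com\tTRUE\t/\tFALSE\t2000000000\tMY_VARIABLE\tMY_VALUE\n' >> cookies.txt
curl -v -b cookies.txt https://www.example.com/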
If you have Firebug installed in Firefox, just open the URL. In the Network panel, right-click and select Copy as cURL. You can see all the curl parameters for this web call.
Annoyingly, there is no cookie file example on the official website https://ec.haxx.se/http/http-cookies.
I finally found that it does not work if your file content is just copied like this:
foo1=bar;foo2=bar2
I guess the format must look like the style described by #Agustí Sánchez. You can test it by using -c to create a cookie file against a website.
So try this way instead; it works:
curl -H "Cookie:`cat ./my.cookie`" http://xxxx.com
You can just copy the cookie value from the Network tab of the Chrome developer console.