Custom Receiver and Cookies from Akamai

I'm experiencing a problem with HLS from Akamai.
I'm using HLS from Akamai with tokens; to start the stream, cookies must be set from the response to the master m3u8 playlist request.
Response from Akamai:
Access-Control-Allow-Credentials:true
Access-Control-Allow-Origin:*
Access-Control-Expose-Headers:Content-Type
Cache-Control:max-age=0, no-cache, no-store
Connection:keep-alive
Content-Length:818
Content-Type:application/vnd.apple.mpegurl
Date:Wed, 17 Sep 2014 12:15:54 GMT
Expires:Wed, 17 Sep 2014 12:15:54 GMT
Mime-Version:1.0
Pragma:no-cache
Server:AkamaiGHost
Set-Cookie:_alid_=/cropped/
Set-Cookie:hdntl=/cropped/
I'm overriding Host.updateManifestRequestInfo, but both scenarios end in a dead end.
If I use requestInfo.withCredentials = true;, the response is:
XMLHttpRequest cannot load /*MEDIA_URL*/. A wildcard '*' cannot be used in the 'Access-Control-Allow-Origin' header when the credentials flag is true. Origin '/*PLAYERS_HOST*/' is therefore not allowed access.
With requestInfo.withCredentials = false;, I get a 403 response.
What is the correct way to implement a custom player for HLS from Akamai with tokens?

I know it is a very old post, but in case somebody gets the same problem in the future: if you set withCredentials to true, then the Akamai response has to explicitly authorize your Chromecast receiver through its CORS headers.
Ask Akamai to add the domain of your receiver to their CORS configuration and the issue will be resolved. Please have a look at this page for more info: https://developers.google.com/cast/docs/player.

You need to write a custom receiver (using the Media Player Library, MPL) and override Host.updateManifestRequestInfo / Host.updateSegmentRequestInfo on the host to achieve the desired behavior, as sketched below.
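For reference, a minimal sketch of what those overrides can look like on the MPL host; the surrounding receiver setup is omitted, and mediaElement / mediaUrl are assumed to exist:
// Sketch of the MPL host overrides; mediaElement and mediaUrl are assumed.
var host = new cast.player.api.Host({
  mediaElement: mediaElement,
  url: mediaUrl
});
// Send cookies with manifest requests so the Akamai token cookies
// set on the master playlist response accompany later requests.
host.updateManifestRequestInfo = function (requestInfo) {
  if (!requestInfo.url) {
    requestInfo.url = this.url;
  }
  requestInfo.withCredentials = true;
};
// Do the same for media segment requests.
host.updateSegmentRequestInfo = function (requestInfo) {
  requestInfo.withCredentials = true;
};
var player = new cast.player.api.Player(host);
Note that, as the answer above says, withCredentials = true only works once the receiver's domain is allowed in Akamai's CORS configuration; the browser rejects a wildcard Access-Control-Allow-Origin when credentials are enabled.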

Related

AWS S3 PUT Example using REST API

The AWS S3 PUT REST API docs are lacking a clear example of the Authorization string in the Request Syntax.
Request Syntax
PUT /Key+ HTTP/1.1
Host: Bucket.s3.amazonaws.com
x-amz-acl: ACL
Cache-Control: CacheControl
Content-Disposition: ContentDisposition
Content-Encoding: ContentEncoding
Content-Language: ContentLanguage
Content-Length: ContentLength
Content-MD5: ContentMD5
Content-Type: ContentType
Expires: Expires
x-amz-grant-full-control: GrantFullControl
x-amz-grant-read: GrantRead
x-amz-grant-read-acp: GrantReadACP
x-amz-grant-write-acp: GrantWriteACP
x-amz-server-side-encryption: ServerSideEncryption
x-amz-storage-class: StorageClass
x-amz-website-redirect-location: WebsiteRedirectLocation
x-amz-server-side-encryption-customer-algorithm: SSECustomerAlgorithm
x-amz-server-side-encryption-customer-key: SSECustomerKey
x-amz-server-side-encryption-customer-key-MD5: SSECustomerKeyMD5
x-amz-server-side-encryption-aws-kms-key-id: SSEKMSKeyId
x-amz-server-side-encryption-context: SSEKMSEncryptionContext
x-amz-request-payer: RequestPayer
x-amz-tagging: Tagging
x-amz-object-lock-mode: ObjectLockMode
x-amz-object-lock-retain-until-date: ObjectLockRetainUntilDate
x-amz-object-lock-legal-hold: ObjectLockLegalHoldStatus
Body
The docs show this request example further on...
PUT /my-image.jpg HTTP/1.1
Host: myBucket.s3.<Region>.amazonaws.com
Date: Wed, 12 Oct 2009 17:50:00 GMT
Authorization: authorization string
Content-Type: text/plain
Content-Length: 11434
x-amz-meta-author: Janet
Expect: 100-continue
[11434 bytes of object data]
But again, the doc does not have an example format for the Authorization string. I tried AccessKeyID Secret but that didn't work. I don't even see logical parameters in the request syntax to pass the two parts of the credential (AccessKeyID and Secret) anywhere in the examples!
Does anyone have a simple example of how to use PUT to add a .json file to S3 using the REST API? Preferably a screenshot of a Postman setup to better explain where values go (in the URL vs. as headers).
From the AWS docs here, it appears it is not possible to create a PUT request to an S3 bucket using the REST API alone:
For authenticated requests, unless you are using the AWS SDKs, you have to write code to calculate signatures that provide authentication information in your requests.
This is a new concept to me. I've used token requests and sent keys in headers before when authenticating via REST APIs. It sounds like a more secure method of auth.
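In other words, the Authorization header is not your keys; it is a Signature Version 4 signature computed over the request. The practical way to get a working PUT without hand-rolling the signing algorithm is to let an SDK produce a presigned URL and use that from Postman or curl. A sketch with the AWS SDK for JavaScript (v2); the bucket, key, and region are placeholders:
// Sketch: generate a presigned PUT URL so the SigV4 signature is baked
// into the URL instead of hand-computed. Bucket/key/region are placeholders;
// credentials come from the usual SDK credential chain.
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' });
const url = s3.getSignedUrl('putObject', {
  Bucket: 'my-bucket',
  Key: 'data/example.json',
  ContentType: 'application/json',
  Expires: 900 // seconds the URL stays valid
});
console.log(url);
// Then PUT from any HTTP client; the Content-Type must match what was signed:
//   curl -X PUT -H "Content-Type: application/json" \
//        --data-binary @example.json "<the signed URL>"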

JMeter: "This site does not specify a policy in the P3P header" error

I am trying to hit https://subdomain.example.com in JMeter. I recorded the scenario using the BlazeMeter Chrome extension, and the test plan has all the necessary config elements, but I get an error:
HTTP/1.1 429 Too Many Requests
Content-Type: text/html; charset=utf-8
Content-Length: 1031
Connection: keep-alive
Cache-Control: private, no-cache, no-store, must-revalidate
Date: Tue, 20 Aug 2019 01:21:35 GMT
Expires: 0
p3p: CP="This site does not specify a policy in the P3P header"
I have tried copying the cookies from the browser's response headers, which works for some time but then starts throwing the error again.
As per HTTP Status Code 429 Too Many Requests description:
The HTTP 429 Too Many Requests response status code indicates the user has sent too many requests in a given amount of time ("rate limiting").
A Retry-After header might be included in this response, indicating how long to wait before making a new request.
So there are the following options:
Your server is overloaded; in this case there is nothing you can do apart from reporting the error as the bottleneck.
Your script doesn't have proper correlation implemented, i.e. you're sending recorded hard-coded values instead of extracting dynamic parameters.
Your server doesn't allow that many requests from a single IP address within the given time frame; you could try implementing IP spoofing so the server "thinks" the requests are coming from different machines.
Thanks for your reply. In the end I figured out that no limit on the number of calls was implemented.
To answer the question, this is how I got it working:
I opened the page in Chrome and copied all the request headers from the headers section into the HTTP Header Manager, hard-coded.
The first request fails and returns p3p: CP="This site does not specify a policy in the P3P header", but it also returns the updated variable value needed for the next request, which I extract and use in the next and subsequent requests. I found out which variable was changing by doing a string comparison of two response headers.
This was a difficult one, but it worked with a very minor change. To be on the safe side, I also added the Header Manager to each request. The correlation idea is sketched below.
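Outside JMeter, the same correlation pattern looks roughly like this in plain JavaScript; the header name x-dynamic-token is hypothetical, standing in for whatever value the string comparison shows is changing:
// Illustration of the correlation idea: take a changing value from one
// response's headers and send it on the next request, instead of replaying
// a hard-coded recording. The header name is hypothetical.
async function correlatedRequests(baseUrl) {
  const first = await fetch(baseUrl);
  const token = first.headers.get('x-dynamic-token');
  const second = await fetch(baseUrl + '/next', {
    headers: { 'x-dynamic-token': token }
  });
  return second.status;
}
In JMeter itself, the equivalent is an extractor that reads the response headers and feeds a variable into the next sampler.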

Access Denied from Cloudfront with Secure Cookies returns no CORS headers preventing reading error information from a XHR request

I'm using CloudFront secure cookies to keep some files private. When cookie auth succeeds and the origin is hit, CloudFront returns the proper CORS headers (Access-Control-Allow-Origin) from the origin. But how do I make CloudFront return CORS headers on a 403/Access Denied? This validation happens entirely in CloudFront, before any request to the origin, but is there a setting to enable it? I want to be able to make an XHR request to CloudFront and know why the request failed. Since CloudFront doesn't return CORS headers on a 403, most modern browsers will prevent reading any information on the request, including the status code, and it's tough to determine why the request failed.
Thanks!
As you know, CloudFront doesn't spontaneously emit CORS headers -- they need to come from the origin server -- so in order to see CORS headers in the response, the request needs to be allowed by CloudFront... but, of course, it can't be allowed, because the condition you're trying to catch is 403 Forbidden.
So, what we need in order to allow your unauthorized responses to be CORS-friendly is an additional origin that can provide us with an alternate error response, and that origin needs to be CORS-aware. The solution seems to be something we can accomplish with a little help from CloudFront Custom Error Responses and an otherwise-empty S3 bucket, created for the purpose.
Custom error responses allow you to configure CloudFront to fetch the custom error response from another origin, rather than generating it internally. As part of that process, some headers from the original request are included in the upstream fetch, and the response headers from the error document are returned.
S3 makes a handy origin, since it has configurable CORS support.
Create a new, empty bucket.
Enable CORS for the bucket, and configure CORS with the appropriate parameters. The default configuration may be fine for this purpose; a sketch of applying one programmatically follows these steps.
Create a simple file that your CloudFront distribution will be using instead of its built in response for a 403. For test purposes, that can just be a text file that says "Access denied."
Upload the file to the bucket with whatever name you like, such as 403.txt. Select the option to make the object publicly-readable. Set metadata Cache-Control: no-cache, no-store, private, must-revalidate and Content-Type: text/plain (or text/html, depending on what exactly you put in the error file).
In CloudFront, create a new Origin. For the Origin Domain Name, select the bucket from the list of buckets.
Create a new Cache Behavior, matching path /403.txt (or whatever you named the file). Whitelist the Origin, Access-Control-Request-Headers, and Access-Control-Request-Method headers for forwarding. Set Restrict Viewer Access to No, because for this one path, we don't require signed credentials. Note that this path needs to be exactly the same as the filename in the bucket (except the leading slash, which isn't shown in the bucket but should be included, here).
In CloudFront Custom Error Responses, choose Create Custom Error Response. Select error code 403, set Error Caching Minimum TTL to 0, choose Customize Error Response Yes, set Response Page Path /403.txt and set HTTP Response code to 403.
Profit!
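For the CORS step above, a sketch of applying a permissive configuration with the AWS SDK for JavaScript (v2); the bucket name and region are placeholders, and you may want to list your origins explicitly instead of the wildcard:
// Sketch: permissive CORS configuration for the empty error bucket.
// Bucket name and region are placeholders.
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' });
s3.putBucketCors({
  Bucket: 'my-error-bucket',
  CORSConfiguration: {
    CORSRules: [{
      AllowedOrigins: ['*'],           // or your site's origins, explicitly
      AllowedMethods: ['GET', 'HEAD'],
      AllowedHeaders: ['*'],
      MaxAgeSeconds: 3000
    }]
  }
}, (err) => {
  if (err) console.error('putBucketCors failed:', err);
});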
Test:
$ curl -v dzczcnnnnexample.cloudfront.net -H 'Origin: http://example.com'
* Rebuilt URL to: dzczcnnnnexample.cloudfront.net/
* Trying 203.0.113.208...
* Connected to dzczcnnnnexample.cloudfront.net (203.0.113.208) port 80 (#0)
> GET / HTTP/1.1
> Host: dzczcnnnnexample.cloudfront.net
> User-Agent: curl/7.47.0
> Accept: */*
> Origin: http://example.com
>
< HTTP/1.1 403 Forbidden
< Content-Type: text/plain
< Content-Length: 16
< Connection: keep-alive
< Date: Sun, 08 Apr 2018 14:01:25 GMT
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: GET, HEAD
< Access-Control-Max-Age: 3000
< Last-Modified: Sun, 08 Apr 2018 13:29:19 GMT
< ETag: "fd9e8f7be7b65381c4acc272b6afc858"
< x-amz-server-side-encryption: AES256
< Cache-Control: private, no-cache, no-store, must-revalidate
< Accept-Ranges: bytes
< Server: AmazonS3
< Vary: Origin,Access-Control-Request-Headers,Access-Control-Request-Method
< X-Cache: Error from cloudfront
< Via: 1.1 1234567890a26beddeac6bfc77b2d348.cloudfront.net (CloudFront)
< X-Amz-Cf-Id: ExAmPlEIbQBtaqExamPLEQs4VwcxhvtU1YXBi47uUzUgami0Hj0MwQ==
<
Access denied.
* Connection #0 to host dzczcnnnnexample.cloudfront.net left intact
Here, Access denied. is what I put in the text file I created. You may want to get a little more creative, after confirming that this works for you, as it does for me. The content of this new file in S3 will always be returned whenever CloudFront throws a 403 error, and also whenever your origin throws a 403, because custom error responses are designed to replace all errors with a given HTTP status code.
You note, above, that we see Access-Control-Allow-Origin: *. This is the default behavior of S3 CORS. If you provide explicit origins in the S3 CORS config, you get a response like this...
Access-Control-Allow-Origin: http://example.com
...but for GET requests, I assume this level of specificity would not be necessary and the wildcard would suffice. The scenario described here isn't setting CORS for the entire CloudFront distribution -- just for the error response.

Validation of Akamai caching

We have cached HTML and PNG pages in Akamai by changing the WAA config, but we are unable to validate it through Fiddler, Live HTTP Headers, or curl commands. Below are the screenshots. Please help if I missed any headers.
(Fiddler and Live HTTP Headers screenshots not shown.)
Curl command:
$ curl -H "Pragma: akamai-x-cache-on, akamai-x-cache-remote-on, akamai-x-check-cacheable, akamai-x-get-cache-key, akamai-x-get-extracted-values, akamai-x-get-nonces, akamai-x-get-ssl-client-session-id, akamai-x-get-true-cache-key, akamai-x-serial-no" -IXGET "url"
Response :
HTTP/1.1 200 OK
Date: Thu, 20 Oct 2016 19:15:53 GMT
Content-Length: 19836
Content-Type: image/png
X-FRAME-OPTIONS: DENY
In the Akamai WAF (Web Application Firewall) there are a few rules that prevent the debug Pragma headers from being shown to users. You need to create an exception for those WAF rules and add only trusted IPs to it. Then you will be able to see the information that you are looking for.
I would double-check that you are definitely sending that Pragma header with the request and that it is properly formed. I've seen a lot of problems with people trying to set this up and not getting it right.
It's also worth reviewing your Akamai configuration, because it is possible to switch this off - some clients prefer to do so for security reasons.
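For comparison, when the debug Pragma headers are honored, the response typically includes extra headers along these lines (the values here are illustrative placeholders, not real output):
X-Cache: TCP_HIT from <edge-server>.deploy.akamaitechnologies.com (AkamaiGHost)
X-Check-Cacheable: YES
X-Cache-Key: <the cache key for the object>
If none of these appear, the Pragma request header is being stripped (e.g. by the WAF rule mentioned above) or is malformed.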

set-cookie header not working

I'm developing a small site w/ Go and I'm trying to set a cookie from my server.
I'm running the server on localhost, with 127.0.0.1 aliased to subdomain-dev.domain.com on port 5080.
When I receive the response to my POST to subdomain-dev.domain.com:5080/login I can see the Set-Cookie header. The response looks like this:
HTTP/1.1 307 Temporary Redirect
Location: /
Set-Cookie: myappcookie=encryptedvalue==; Path=/; Expires=Fri, 13 Sep 2013 21:12:12 UTC; Max-Age=900; HttpOnly; Secure
Content-Type: text/plain; charset=utf-8
Content-Length: 0
Date: Fri, 13 Sep 2013 20:57:12 GMT
Why isn't Chrome or Firefox recording this? In Chrome it doesn't show up in the Resources tab. In FF I can't see it either. And in neither browser do I see it in subsequent request headers.
See that Secure string in the cookie?
Yeah, me too. But only after a few hours.
Make sure you're accessing your site over SSL (https:// at the beginning of the URL) if you've got the Secure flag set.
If you're developing locally and don't have a cert, make sure you skip that option.
In my case, I had to add this to my response:
access-control-expose-headers: Set-Cookie
I found here that my Set-Cookie header was not accessible to my client unless I added it to the exposed headers.
Hope this can help someone!
I found the related GitHub issue "response cookies not being sent", which helped.
In my case I am running a React app under HTTPS (with the mkcert tool) and making a cross-origin fetch request; the response's cookies are not set until I specify credentials: 'include' on the fetch request.
Example fetch call:
fetch('https://example.com', {
credentials: 'include'
});
Also specify these response headers from the server:
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: https://localhost:3000
The Access-Control-Allow-Origin header has the URL of my React app as its value.
Finally, add these attributes to the Set-Cookie header: Path=/; HttpOnly; Secure; SameSite=None (see MDN's "Using HTTP cookies"). A minimal server-side sketch follows.
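Putting the pieces together, a minimal Express sketch of the server side; the allowed origin, route, port, and cookie name/value are placeholders:
// Minimal Express sketch of the response described above.
// The allowed origin and cookie name/value are placeholders.
const express = require('express');
const app = express();
app.post('/login', (req, res) => {
  res.set('Access-Control-Allow-Credentials', 'true');
  res.set('Access-Control-Allow-Origin', 'https://localhost:3000');
  res.cookie('session', 'value', {
    path: '/',
    httpOnly: true,
    secure: true,     // required when SameSite is 'none'
    sameSite: 'none'
  });
  res.sendStatus(204);
});
app.listen(5080);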
Hope it helps someone!