Specifying headers in Chromecast Receiver - cookies

The API that I'm retrieving the media payload from requires an authentication header. Right now I'm simply setting the cookies of the document and then fetching the content, because that seemed to be the easiest way.
I ran into issues with this because cookies are domain-specific: since I'm hosting the receiver file on Google Drive rather than on the domain that serves the API endpoints, I had to set up proxy servers that rewrite the cookies in transit before the requests reach the endpoints.
This is an extremely complicated way of authenticating. Does anyone know how I can specify a header in the receiver file?
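For reference, here is a minimal sketch of the cookie-based workaround described above (the variable names are illustrative, not taken from the actual receiver):

```javascript
// Write the auth value into document.cookie so it rides along with requests
// to the same origin, then fetch the media payload. This only helps when the
// receiver page and the API share a domain, which is exactly the limitation
// that forced the proxy setup described above.
document.cookie = 'session=' + authToken + '; path=/';

fetch(mediaPayloadUrl, { credentials: 'include' })
  .then(function (response) { return response.json(); })
  .then(function (payload) {
    // hand the payload to the media player here
  });
```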

Related

How to store a JWT token response from DynamoDB

I have stored my JWT token in DynamoDB, written there by the Step Function that generates it.
I fetch the token through API Gateway from my static site hosted in S3.
Does anyone know how to save it in a cookie?
I am using the Serverless Framework to deploy my Lambdas, if that is any help.
Thanks
To save this (or anything else, really) in a cookie, you need to either respond to a browser request with a Set-Cookie HTTP header, see:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie
or to set it with client-side JavaScript, see:
https://developer.mozilla.org/en-US/docs/Web/API/Document/cookie
How you do it exactly depends on:
do you want to set it with HTTP headers or client-side JavaScript?
what framework do you use?
how your application is accessed?
how do you want to use that cookie?
Remember that a cookie is client-side state. All you have told us is how you obtain the data you want to store in a cookie (which doesn't matter), not how the client-server interaction works (which is what matters here).
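As a rough illustration of both options (everything here is a sketch with made-up names, not your actual stack): with an API Gateway Lambda proxy integration, the Lambda that returns the token can set the cookie itself via the Set-Cookie response header:

```javascript
// Lambda proxy integration response; lookupToken() is a hypothetical helper
// standing in for however you read the token out of DynamoDB.
exports.handler = async () => {
  const token = await lookupToken();
  return {
    statusCode: 200,
    headers: {
      'Set-Cookie': `jwt=${token}; Secure; HttpOnly; SameSite=Strict; Path=/`,
    },
    body: JSON.stringify({ ok: true }),
  };
};
```

Or the static site can set it with client-side JavaScript after fetching the token (a cookie set this way cannot be HttpOnly):

```javascript
// '/token' is a placeholder for your API Gateway route.
fetch('/token')
  .then((res) => res.json())
  .then(({ token }) => {
    document.cookie = `jwt=${token}; Secure; SameSite=Strict; Path=/`;
  });
```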

Security concern in direct browser uploads to S3

The main security concern in direct JavaScript browser uploads to S3 is that S3 credentials end up stored on the client side.
To mitigate this risk, the S3 documentation recommends using short-lived keys generated by an intermediate server:
A file is selected for upload by the user in their web browser.
The user’s browser makes a request to your server, which produces a temporary signature with which to sign the upload request.
The temporary signed request is returned to the browser in JSON format.
The browser then uploads the file directly to Amazon S3 using the signed request supplied by your server.
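A minimal browser-side sketch of that flow might look like the following; the /sign endpoint and the shape of its response are hypothetical stand-ins for whatever the server actually returns:

```javascript
async function uploadDirectToS3(file) {
  // Steps 1-3: ask the server for a temporary signed request for this file.
  const res = await fetch('/sign?filename=' + encodeURIComponent(file.name));
  const { uploadUrl } = await res.json(); // e.g. a pre-signed PUT URL

  // Step 4: upload the file straight to S3 using the signed URL.
  await fetch(uploadUrl, { method: 'PUT', body: file });
}
```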
The problem with this flow is that I don't see how it helps in the case of public uploads.
Suppose my upload page is publicly available. That means the server API endpoint that generates the short-lived key needs to be public as well. A malicious user could then just find the address of the API endpoint and hit it every time they want to upload something. The server has no way of knowing whether the request came from a real user on the upload page or from anywhere else.
Yes, I could check the domain on requests coming in to the API and validate it, but the domain can easily be spoofed (when the request is not coming from a browser client).
Is this whole thing even a concern? The main risk is someone abusing my S3 account and uploading stuff to it. Are there other concerns that I need to know about? Can this be mitigated somehow?
Suppose my upload page is publicly available. That means the server API endpoint that generates the short-lived key needs to be public as well. A malicious user could then just find the address of the API endpoint and hit it every time they want to upload something. The server has no way of knowing whether the request came from a real user on the upload page or from anywhere else.
If that concerns you, you could require your users to log in to your website and serve the API endpoint behind the same server-side authentication that handles your login process. Then only authenticated users would be able to upload files.
You might also want to look into S3 pre-signed URLs.
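For example, a signing endpoint behind your login could look roughly like this. This is a sketch using the AWS SDK for JavaScript v3 and Express; requireLogin, req.user, and the bucket name are placeholders for your own auth middleware and configuration:

```javascript
const express = require('express');
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

const app = express();
const s3 = new S3Client({ region: 'us-east-1' }); // illustrative region

// Stand-in for your real session/auth check, which should also set req.user.
const requireLogin = (req, res, next) => {
  req.user = { id: 'demo-user' }; // placeholder
  next();
};

// Only authenticated users can obtain a signature, and it expires quickly.
app.get('/sign', requireLogin, async (req, res) => {
  const command = new PutObjectCommand({
    Bucket: 'my-upload-bucket',                  // illustrative bucket name
    Key: `uploads/${req.user.id}/${Date.now()}`, // scope keys per user
  });
  const uploadUrl = await getSignedUrl(s3, command, { expiresIn: 300 }); // 5 min
  res.json({ uploadUrl });
});

app.listen(3000);
```

A short expiry plus per-user key prefixes limits how much an abuser can do with any single leaked signature.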

Improvements on cookie-based session management

"Instead of using cookies for authorization, server operators might
wish to consider entangling designation and authorization by treating
URLs as capabilities. Instead of storing secrets in cookies, this
approach stores secrets in URLs, requiring the remote entity to
supply the secret itself. Although this approach is not a panacea,
judicious application of these principles can lead to more robust
security." A. Barth
https://www.rfc-editor.org/rfc/rfc6265
What is meant by storing secrets in URLs? How would this be done in practice?
One technique that I believe fits this description is requiring clients to request URLs that are signed with HMAC. Amazon Web Services offers this technique for some operations, and I have seen it implemented in internal APIs of web companies as well. It would be possible to sign URLs server side with this or a similar technique and deliver them securely to the client (over HTTPS) embedded in HTML or in responses to XMLHttpRequests against an API.
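As a concrete, deliberately simplified sketch of the idea, here is HMAC signing and verification of a URL using Node's crypto module; the parameter names are illustrative, and real schemes such as AWS Signature V4 are considerably more involved:

```javascript
const crypto = require('crypto');

// Produce a URL that carries its own expiry and signature.
function signUrl(path, secret, ttlSeconds) {
  const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
  const signature = crypto
    .createHmac('sha256', secret)
    .update(`${path}?expires=${expires}`)
    .digest('hex');
  return `${path}?expires=${expires}&signature=${signature}`;
}

// The server recomputes the HMAC and checks the expiry before serving.
function verifyUrl(path, expires, signature, secret) {
  const expected = crypto
    .createHmac('sha256', secret)
    .update(`${path}?expires=${expires}`)
    .digest('hex');
  const sigBuf = Buffer.from(signature, 'hex');
  const expBuf = Buffer.from(expected, 'hex');
  return Number(expires) > Date.now() / 1000 &&
    sigBuf.length === expBuf.length &&
    crypto.timingSafeEqual(sigBuf, expBuf);
}
```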
As an alternative to session cookies, I'm not sure what advantage such a technique would offer. However, in some situations it is convenient, and sometimes it is the best way to solve a problem. For example, I've used similar techniques when:
Cross Domain
You need to give the browser access to a URL that is on another domain, so cookies are not useful, and you have the capability to sign a URL server side to give access, either on a redirect or with a long enough expiration that the browser has time to load the URL.
Examples: Downloading files from S3. Progressive playback of video from CloudFront.
Closed Source Limitations
You can't control what the browser or other client is sending, aside from the URL, because you are working with a closed source plugin of some kind and can't change its behavior. Again you sign the URL server side so that all the client has to do is GET the URL.
Examples: Loading video captioning and/or sprite files via WebVTT into a closed-source Flash video player. Sending a payload along with a federated single sign-on callback URL, when you need to ensure that the payload can't be changed in transit.
Credential-less Task Worker
You are sending a URL to something other than a browser, and that something needs to access the resource at that URL, and on top of that you don't want to give it actual credentials.
Example: You are running a queue consumer or task-based worker daemon or maybe an AWS Lambda function, which needs to download a file, process it, and send an email. Simply pre-sign all the URLs it will use, with a reasonable expiration, so that it can perform all the requests it needs to without any additional credentials.

Why does Google Analytics/Mixpanel/etc. send cookies to my server?

Let's say I have a website at http://domain.org on which I also have Google Analytics (GA) and Mixpanel (MP) JavaScript tracking codes. Both MP and GA store cookies in the user's browser for my entire domain, including subdomains (.domain.org).
Because of this, every time I make a request to any URL in this domain, the cookies for GA and MP are sent along.
New Relic, on the other hand, stores its cookie on the bam.nr-data.net domain.
Why do GA and MP do this? Only in some unusual backend hack would one use the values of these cookies on the backend.
Google Analytics uses first-party cookies (the JavaScript snippet injected into your site's source sets the cookie under your domain) because third-party cookies are often blocked (in many browsers they are blocked by default).
Since they are set under your domain name, they are sent to your server. That is pretty much a side effect of the original intent (which is to make the tracking more reliable); you are not supposed to use them in your backend.
When setting up Google Analytics you can use the cookie domain parameter to limit GA cookies to a specific part of your domain or subdomain. You cannot, to the best of my knowledge, make GA use a third-party cookie.
On a related note, if your goal is to avoid sending the cookie data with each request, you can tell Google Analytics to use localStorage (instead of cookies) to store the client ID.
Here's a thread on the HTML5Boilerplate repo discussing implementation and the pros and cons of such an approach.
The TL;DR is: if you don't have to support IE7 and older, and you're not tracking across multiple subdomains, you can use localStorage with no problems.
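If it helps, the pattern discussed in that thread (with the classic analytics.js snippet; the property ID is a placeholder) looks roughly like this:

```javascript
// Keep the client ID in localStorage instead of the _ga cookie, so nothing
// analytics-related is sent to your server with every request.
ga('create', 'UA-XXXXX-Y', {
  storage: 'none',
  clientId: localStorage.getItem('ga:clientId'),
});
ga(function (tracker) {
  localStorage.setItem('ga:clientId', tracker.get('clientId'));
});
ga('send', 'pageview');
```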

CSRF prevention in browser plugin-only API that is available on HTTPS

I have to design an API that will support browser plugins only (latest version of Chrome, Firefox, IE). The API will be served over HTTPS. The API will be using a cookie-based access control scheme.
I am wondering what tactics to employ for CSRF prevention. Specifically, I want my API to accept requests only from my own browser plugin and not from any other pages or plugins.
Would I be able to:
Assume that in most cases there would be an Origin header?
Would I be able to compare and trust the Origin header to ensure that the requests only come from a white-listed set of Origins?
Would this be compatible across the board (Chrome/Firefox/IE)?
I'm aware of multiple techniques used to prevent CSRF, such as the Synchronizer Token Pattern, but I would like to know whether, within the limited scope above, simply checking the Origin header would be sufficient.
Thanks in advance!
Set a custom header like X-From-My-Plugin: yes. The server should require its presence. It can be a constant. A web attacker can either:
make the request from the user's browser: this sends the cookie, but they can't send the custom header cross-origin; or
make the request from a different HTTP client: they can send the custom header, but they can't send the cookie because they don't know it
Either way, the attacker's request won't have both the cookie and the custom header.
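A minimal sketch of both sides (the endpoint URL is made up, and Express on the server is an assumption, not something from the question):

```javascript
// Plugin side: send the constant header along with the session cookie.
fetch('https://api.example.com/v1/action', {
  method: 'POST',
  credentials: 'include', // include the auth cookie
  headers: {
    'Content-Type': 'application/json',
    'X-From-My-Plugin': 'yes',
  },
  body: JSON.stringify({ action: 'example' }),
});
```

And the corresponding server-side check:

```javascript
const express = require('express');
const app = express();
app.use(express.json());

app.post('/v1/action', (req, res) => {
  // Reject anything missing the constant header. A cross-site page cannot
  // attach it to a credentialed request without a CORS preflight that your
  // server would simply not approve.
  if (req.get('X-From-My-Plugin') !== 'yes') {
    return res.status(403).json({ error: 'forbidden' });
  }
  res.json({ ok: true });
});

app.listen(3000);
```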