IVS Token Authorisation - amazon-web-services

I hope I can explain the problems I'm having clearly enough.
I used this guide (https://catalog.us-east-1.prod.workshops.aws/v2/workshops/022adf04-0ff9-49af-848f-993e42575540/en-US/playauth) to generate a playback token, and after reading and following the entire guide I was able to generate a token successfully.
"statusCode": 200,
"body": "{\"token\":\"eyJhbGciOiJFUzM4NCIsInR5cCI6IkpXVCJ9.xxxxxxxxxxxxxxxxxxxxxxm4iOiJhcm46YXdzOml2czpldS13ZXN0LTE6MDgxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxOmFjY2Vzcy1jb250cm9sLWFsbG93LW9yaWdpbiI6Imh0dHBzOi8vd3d3LmZvb3R5LnRvIiwiaWF0IjoxNjQ0MzUyMjI2LCJleHAiOjE2NDY5NDQyMjZ9.EQ1tnLU5uQhxnkVjJvrOo_z1Jlf4w0yMuhgWtB8ZBf_NKgWJCcMmToKia8u1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"}",
"headers": {
"Access-Control-Allow-Origin": "https://www.xxxxxx.com"
}
}
I append this token to the stream URL and everything works as it should:
https://247dfhj3e56u467.us-xxxx-1.playback.live-video.net/api/video/v1/us-east-1.08xxxxxx06.channel.GpxxxxxxxxxxwA.m3u8?token=eyJhbGciOiJFUzM4NCIsInR5cCI6IkpXVCJ9.xxxxxxxxxxxxxxxxxxxxxxm4iOiJhcm46YXdzOml2czpldS13ZXN0LTE6MDgxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxOmFjY2Vzcy1jb250cm9sLWFsbG93LW9yaWdpbiI6Imh0dHBzOi8vd3d3LmZvb3R5LnRvIiwiaWF0IjoxNjQ0MzUyMjI2LCJleHAiOjE2NDY5NDQyMjZ9.EQ1tnLU5uQhxnkVjJvrOo_z1Jlf4w0yMuhgWtB8ZBf_NKgWJCcMmToKia8u1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
This token only works on my domain; when I use it on my other domain I get a CORS error, because the token is bound to the domain I specified in the Lambda function.
So so far the token generator works...
But as soon as someone grabs the stream link from the page source, they can use it in VLC or any other m3u8 player; even some HLS/m3u8 browser extensions in Chrome can play it effortlessly.
My question to you is as follows:
Am I using the given token correctly?
Is there perhaps a Lambda function script (JSON) that disables these playback options?
Or can I solve this in another way, so that the stream can only be played on my domain and not in a VLC player or browser extension?
Hopefully someone has a solution for this, because otherwise the token generator isn't really adding much value.
Sincerely.

Your use case seems correct.
Unfortunately, it's difficult to revoke a JWT token once you have created it. There is an expiration property; the default value in index.js is 2 days, and you can set it shorter if needed.
If it were me, I'd integrate with some other system such as Cognito, or use IP-based security groups.
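For reference, here is a minimal sketch of a shorter-lived token Lambda, assuming the jsonwebtoken library used in that workshop; CHANNEL_ARN, ALLOWED_ORIGIN and PLAYBACK_PRIVATE_KEY are placeholder environment variable names, not values from the guide.
// Hypothetical sketch: sign an IVS playback token with a short expiry
const jwt = require('jsonwebtoken');

exports.handler = async () => {
    const payload = {
        'aws:channel-arn': process.env.CHANNEL_ARN,
        'aws:access-control-allow-origin': process.env.ALLOWED_ORIGIN
    };
    // expiresIn sets the exp claim; e.g. 15 minutes instead of the default 2 days
    const token = jwt.sign(payload, process.env.PLAYBACK_PRIVATE_KEY, {
        algorithm: 'ES384',
        expiresIn: '15m'
    });
    return {
        statusCode: 200,
        headers: { 'Access-Control-Allow-Origin': process.env.ALLOWED_ORIGIN },
        body: JSON.stringify({ token })
    };
};
A short expiry doesn't stop someone from pasting the URL into VLC while the token is still valid, but it does limit how long a leaked link keeps working.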

Related

Istio, flask, and jwts--how to handle jwts from browser? (cookie vs Authorization header etc)

Ok, feeling dumb. I've been following https://auth0.com/blog/securing-kubernetes-clusters-with-istio-and-auth0/ to secure a flask app through an ingressgateway.
Basically, my AuthorizationPolicy is being seen (whitelisted routes are working, other routes are denied (enforced denied, matched policy none)) but anything that requires a jwt (key: request.auth.claims[...] or even source: requestPrincipals: ["*"]) fails.
Adding Envoy debug information, I don't see any Authorization header, which may be part of the problem; Flask, of course, stores the access token as part of its session cookie (as in the linked article), and it seems you'd need something like that for Istio to see it in the request. I tried setting an access_token cookie directly and using an EnvoyFilter to break it out into an Authorization header, but that didn't seem to work either (I probably got the Envoy filter wrong; I'm new to them, but I was trying an envoy.filters.http.jwt_authn filter with from_cookies; nice idea, but I can't even tell if it's being called).
I'm baffled at this point. How do I store the user's jwt after the OIDC shuffle in such a way that the browser sends it back in a way that Istio is happy with? By default Istio really seems to want the Authorization header, but I'm not clear how to get it (or if that's desired). Seems like an obvious pattern but searching comes up surprisingly short, which makes me feel like I'm missing something Really Obvious.

Intermittent 403 CORS Errors (Access-Control-Allow-Origin) With Cloudfront Using Signed URLs To GET S3 Objects

In Brief
In order to keep the uploaded media (S3 objects) private for all the clients on my multi-tenant system, I implemented a CloudFront CDN distribution and configured it (and its origin S3 bucket) to force the use of signed URLs in order to GET any of the objects.
The Method
First, the user is authenticated via my system, and then a signed URL is generated and returned to them using the AWS.CloudFront.Signer.getSignedUrl() method provided by the AWS JS SDK, so they can make the call to CloudFront/S3 to download the object (image, PDF, docx, etc.). Pretty standard stuff.
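For context, the signing call with the v2 JS SDK looks roughly like the sketch below; the key pair ID, private key path, and object URL are placeholders rather than real values.
const AWS = require('aws-sdk');
const fs = require('fs');

// Placeholder key pair ID and private key; these come from the CloudFront key pair used for signing
const signer = new AWS.CloudFront.Signer('KEY_PAIR_ID', fs.readFileSync('private_key.pem', 'utf8'));

const signedUrl = signer.getSignedUrl({
    url: 'https://dxxxxxxxx.cloudfront.net/tenant-123/report.pdf',
    expires: Math.floor(Date.now() / 1000) + 24 * 60 * 60 // expire 24 hours after generation
});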
The Problem
The above method works 95% of the time. The user obtains a signed URL from my system and then when they make an XHR to GET the object it's retrieved just fine.
But, 5% of the time a 403 is thrown with a CORS error stating that the client origin is not allowed by Access-Control-Allow-Origin.
This bug (error) has been confirmed across all environments: localhost, dev.myapp.com, prod.myapp.com. And across all platforms/browsers.
There's such a lack of rhyme or reason to it that I'm actually starting to think this is an AWS bug (they do happen, from time-to-time).
The Debugging Checklist So Far
I've been going out of my mind for days now trying to figure this out. Here's what I've attempted so far:
Have you tried a different browser/platform?
Yes. The issue is present across all client origins, browsers (and versions), and all platforms.
Is your S3 Bucket configured for CORS correctly?
Yes. It's wide-open in fact. I've even set <MaxAgeSeconds>0</MaxAgeSeconds> in order to prevent caching of any pre-flight OPTIONS requests by the client.
Is the signed URL expired?
Nope. All of the signed URLs are set to expire 24hrs after generation. This problem has shown up even seconds after any given signed URL is generated.
Is there an issue with the method used to generate the signed URLs?
Unlikely. I'm simply using the AWS.CloudFront.Signer.getSignedUrl() method of their JS SDK. The signed URLs do work most of the time, so it would seem very strange that it would be an issue with the signing process. Also, the error is clearly a CORS error, not a signature mis-match error.
Is it a timezone/server clock issue?
Nope. The system does serve users across many timezones, but that theory proved to be false given that the signed URLs are all generated on the server-side. The timezone of the client doesn't matter; it gets a signed URL good for 24hrs from the time of generation no matter what TZ it's in.
Is your CF distro configured properly?
Yes, so far as I can make out by following several AWS guides, tutorials, docs and such.
Here's a screenshot for brevity. You can see that I've disabled caching entirely in an attempt to rule that out as a cause:
Are you seeing this error for all mime-types?
No. This error hasn't been seen for any images, audio, or video files (objects). With much testing already done, this error only seems to show up when attempting to GET a document or PDF file (.doc, .docx, .pdf). This led me to believe that it was simply an Accept header mismatch: the client was sending an XHR with the header Accept: pdf, but the signature was really generated for Accept: application/pdf.
I haven't yet been able to fully rule this out as a cause, but it's highly unlikely given that the errors are intermittent. If it were an Accept header mismatch, it would be an error every time.
Also, the XHR is sending Accept: */*, so it's highly unlikely this is where the issue is.
The Question
I've really hit a wall on this one. Can anyone see what I'm missing here? The best I can come up with is that this is some sort of "timing" issue. What sort of timing issue, or if it even is a timing issue, I've yet to figure out.
Thanks in advance for any help.
I found the solution for this on Server Fault:
https://serverfault.com/questions/856904/chrome-s3-cloudfront-no-access-control-allow-origin-header-on-initial-xhr-req
You apparently cannot successfully fetch an object from HTML and then successfully fetch it again as a CORS request with Chrome and S3 (with or without CloudFront), due to peculiarities in the implementations.
Adding the answer from the original post so that it does not get lost.
Workaround:
This behavior can be worked-around with CloudFront and Lambda#Edge, using the following code as an Origin Response trigger.
This adds Vary: Access-Control-Request-Headers, Access-Control-Request-Method, Origin to any response from S3 that has no Vary header. Otherwise, the Vary header in the response is not modified.
'use strict';
// If the response lacks a Vary: header, fix it in a CloudFront Origin Response trigger.
exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    if (!headers['vary']) {
        headers['vary'] = [
            { key: 'Vary', value: 'Access-Control-Request-Headers' },
            { key: 'Vary', value: 'Access-Control-Request-Method' },
            { key: 'Vary', value: 'Origin' },
        ];
    }
    callback(null, response);
};

API Gateway Lambda CORS handler. Getting Origin securely

I want to implement CORS for multiple origins, and I understand I need to do so via a Lambda function, as I cannot do that via the MOCK integration.
exports.handler = async (event) => {
    const corsUrls = (process.env.CORS_URLS || '').split(',')
    const requestOrigin = (event.headers && event.headers.origin) || ''

    if (corsUrls.includes(requestOrigin)) {
        return {
            statusCode: 204,
            headers: {
                'Access-Control-Allow-Headers': 'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token,X-Requested-With',
                'Access-Control-Allow-Origin': requestOrigin,
                'Access-Control-Allow-Methods': 'POST,DELETE,OPTIONS'
            }
        }
    }

    return {
        statusCode: 403,
        body: JSON.stringify({
            status: 'Invalid CORS origin'
        })
    }
}
Firstly, does the above look ok? Then, I am getting the origin from the headers via event.headers.origin. But I find that I can just set that header manually to "bypass" CORS. Is there a reliable way to detect the origin domain?
Firstly, does the above look ok?
Your code looks good to me at first glance; other than the point you raised ("I can just set that header manually to 'bypass' CORS"), I don't see any major problems with it.
Then I am getting the origin from the headers via event.headers.origin. But I find that I can just set that header manually to "bypass" CORS. Is there a reliable way to detect the origin domain?
The code you are currently using is the only way I can think of, off the top of my head, to detect the origin domain. Although, as you said, you can just set that header manually, and there are zero assurances that header is correct or valid. It shouldn't be used as a layer of trust for security. Browsers restrict how this header can be set (see Forbidden header name), but if you control the HTTP client (e.g. curl, Postman, etc.) you can easily send whatever headers you want. There is nothing, technology-wise, preventing me from sending any headers with whatever values I want to your web server.
Therefore, at the end of the day, it might not be a huge concern. If someone tampers with that header, they are opening themselves up to security risks and unexpected behavior. There are a ton of ways to bypass CORS (see the links under "Bypass CORS" below), so it's possible to bypass CORS despite your best efforts to enforce it. All of those tricks are hacks, though, and probably won't be used by normal users; the same goes for changing the Origin header.
There are a few other tricks you could look into, though, to enforce it a little bit more. You could look into the Referer header and see if it is consistent with the Origin header. Again, it's possible to send anything for any header, but this makes bypassing a bit harder and enforces what you want a little more.
If you assume that your origin header should always equal the domain of your API Gateway API then the other thing you can look into is the event.requestContext object that API Gateway gives you. That object has resourceId, stage, accountId, apiId, and a few other interesting properties attached to it. You could look into building a system that will also verify those and based on those values, determine which API in API Gateway is making the request. This might require ensuring that you have separated out each domain into a separate API gateway API.
I don't see any way those values in event.requestContext could be tampered with, though, since AWS sets them before passing the event object off to you. They are derived by AWS and cannot easily be tampered with by a user (unless the entire makeup of the request changes). They are certainly far less tamperable than headers, which are simply sent with the request and passed through to you by AWS.
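As a rough illustration (not part of the code above), here is a sketch combining the Origin allow-list with a check of event.requestContext.apiId; EXPECTED_API_ID is an assumed environment variable holding the ID of the API the request should come through.
exports.handler = async (event) => {
    const corsUrls = (process.env.CORS_URLS || '').split(',');
    const requestOrigin = (event.headers && event.headers.origin) || '';
    // apiId is set by API Gateway before the event reaches the function,
    // unlike the Origin header, which is supplied by the client
    const { apiId } = event.requestContext || {};

    if (corsUrls.includes(requestOrigin) && apiId === process.env.EXPECTED_API_ID) {
        return {
            statusCode: 204,
            headers: {
                'Access-Control-Allow-Origin': requestOrigin,
                'Access-Control-Allow-Methods': 'POST,DELETE,OPTIONS'
            }
        };
    }
    return {
        statusCode: 403,
        body: JSON.stringify({ status: 'Invalid CORS origin' })
    };
};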
Of course you can combine multiple of those solutions together to create a solution that enforces your policy more. Remember, security is a spectrum, so how far down that spectrum you go is up to you.
I would also encourage you to remember that CORS is not totally meant to hide information on the internet. Those methods I shared about how you can bypass CORS with a simple backend system, or plugin, show that it's not completely foolproof and if someone really wants to fake headers they will be able to. But of course at the end of the day you can make it as hard as possible for that to be achieved. But that requires implementing and writing a lot of code and doing a lot of checks to make that happen.
You really have to ask yourself what the objectives and goals are; I think that really determines your next steps. You might determine that your current setup is good enough and no further changes are necessary. You might determine that you are trying to protect sensitive data from being sent to unauthorized origins, in which case CORS probably isn't a solid solution (due to the ability to set that header to anything). Or you might determine that you want to lock things down a bit more and use a few other signals to enforce your policy.
tldr: You can for sure set the Origin header to anything you want, therefore it should not be completely trusted. If you assume that your Origin header should always equal the domain of your API Gateway API, you can try to use the event.requestContext object to get more information about the API in API Gateway and thus about the request. You could also look into the Referer header and compare that against the Origin header.
Further information:
Is HTTP Content-Length Header Safe to Trust? (disclaimer: I posted this question on Stack Overflow a while back)
Example Amazon API Gateway AWS Proxy Lambda Event
Bypass CORS
https://github.com/Rob--W/cors-anywhere
https://chrome.google.com/webstore/detail/allow-control-allow-origi/nlfbmbojpeacfghkpbjhddihlkkiljbi
https://www.thepolyglotdeveloper.com/2014/08/bypass-cors-errors-testing-apis-locally/
Referer Header
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referer
https://en.wikipedia.org/wiki/HTTP_referer
The only way to validate multiple origins is as you did: have your Lambda read the Origin header, compare it to the list of domains you would like to allow, and if it matches, return the value of the Origin header back to the client as the Access-Control-Allow-Origin header in the response.
A note: the Origin header is one of the headers that are set automatically by the user agent, so it can't be altered programmatically or through extensions. For more details, look at MDN.

Autodesk Data Management API 403-Error

I am trying to retrieve data via the Autodesk Data Management API. So far I've created a Forge app and connected it with a BIM 360 integration.
Then I wanted to get a list of all hubs, but when I do so, I receive a JSON object containing a warning:
warnings: [{
    "AboutLink": null,
    "Detail": "You don't have permission to access this API",
    "ErrorCode": "BIM360DM_ERROR",
    "HttpStatusCode": "403",
    ...
}]
I called the web service via AJAX, which looks like this:
this.getToken(function(token) {
    $.ajax({
        url: "https://developer.api.autodesk.com/project/v1/hubs",
        beforeSend: function(xhr) {
            xhr.setRequestHeader("Authorization", "Bearer " + token);
        }
    }).done(...);
});
The token is a 3-legged one. I am not sure which API I do not have permission for, because I am pretty sure that I have permission for BIM 360 (I created the integration as an administrator).
In addition to what ZHong mentioned, I would suggest you try this sample. It will ask you to provision your Forge Client ID under your BIM 360 settings; just follow the steps that the app will present.
With both 2- and 3-legged tokens, the app accessing the data (the Forge Client ID) needs authorization from the account admin. Without that, the Hubs endpoint will not return your BIM 360 hub, and the same applies to the Projects endpoint inside it.
Does everything else work fine? For example, can you get all the hubs successfully? I just verified on my side, and I can see the response including the same warning you mentioned, but the hubs are listed correctly, and you can get the projects/items/versions without problem. I pasted my Postman response as follows.
If you check the blog https://forge.autodesk.com/blog/tutorial-using-curl-3-legged-authentication-bim-360-docs-upload, it also shows the same warning, but it seems to have no impact on the following operations. I am not exactly sure what the warning means; I will check and update the details, but so far it seems you can ignore it.

sending data in a secure way

I want to send some data using GET over HTTP. I want to encrypt or scramble it for security reasons, so instead of sending http://www.website.com/service?a=1&b=2&b=3
I want it to look like http://www.website.com/service?data=sdoicvyencvkljnsdpio
and inside the service I want to be able to decrypt the message and get the real data.
What is the best approach for this?
Thanks!
You can use SSL and certificates. You can see how it works here: http://mattfleming.com/node/289. You can find various tutorials on how to do that for your specific web server.
What language are you using? If PHP, you could look at the mcrypt functions.
But seriously, a better way would probably be to use HTTPS, which was designed for that.
I don't know about your application, but it could be relevant.
Another common technique is the secure token technique, where you basically generate a hash of your params plus a secret token. The token is the only thing not included in the URL. At the other end you re-create that hash with the same secret token and see if it matches. This way you can combine security methods like IP validation, time-to-live timestamps, or signing a request by a user (see the sketch below).
A more advanced method is the HTTP Digest authentication
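As a sketch of that hash-of-params idea (shown in Node.js here; the secret value, parameter handling, and SHA-256 choice are just for illustration):
const crypto = require('crypto');

const SECRET = 'shared-secret-kept-out-of-the-url'; // illustration only

// Sender: build the query string and append an HMAC of the params
function signParams(params) {
    const query = new URLSearchParams(params).toString();
    const sig = crypto.createHmac('sha256', SECRET).update(query).digest('hex');
    return query + '&sig=' + sig;
}

// Receiver: recompute the HMAC over the params (without sig) and compare
function verifyParams(query, sig) {
    const expected = crypto.createHmac('sha256', SECRET).update(query).digest('hex');
    if (sig.length !== expected.length) return false;
    return crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(sig));
}
Note that this signs the parameters rather than hiding them; if the values themselves must be unreadable, HTTPS (or actually encrypting the data parameter) is still the better option, as the other replies point out.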
SSL and POSTing the data would be a sensible way to approach this, but if you must do it with GET you can still keep it fairly secure.
The MCrypt libraries for PHP are very good; then, on the receiving page, you would need a checksum to be absolutely sure that the string passed hasn't been tampered with.