How would I set up a jwcrypto token issuer for Google Cloud Run with gRPC? - google-cloud-platform

I'm trying to create a custom authentication method for Google Cloud Endpoints. The idea is that I can configure my ESPv2 container (an Extensible Service Proxy based on Envoy), which is hosted on Google Cloud Run, to accept JWTs from a custom issuer, also hosted on Cloud Run.
Following the Endpoints guide for gRPC, I figure the jwks_uri: part of the YAML file should point to a URL that exposes the public key (which I figure you can do by putting a JWK into a JSON file and hosting that JSON file on Google Cloud Storage, exposed to the public internet).
The part that has me stumped is the issuer. I've gone through RFC 7519, which states that the issuer is a string or URI value. I'm not very familiar with the specific Envoy implementation that the ESPv2 container uses, but my best guess is that the issuer: option in the YAML file is simply matched against the domain or string that the server set when the token was created.
I'm probably wrong, so I'd really appreciate some guidance on this one.
Kind regards,
Despicable B

issuer should be the "iss" field in the JWT that you send to ESPv2.

Author's Solution
After working with the Google Cloud Endpoints team and some of the contributors to ESPv2, we figured it out (and found a few things to point out to anyone wanting to do this in the future).
Addressing the original question
Indeed, as Wayne Zhang pointed out, the issuer can be ANY string value so long as it matches the "iss" claim in the JWT payload.
e.g.
authentication:
  providers:
  - id: some-fancy-id
    issuer: fart # <-- Don't wrap ANY of these in double quotes
    jwks_uri: https://storage.googleapis.com/your-public-bucket/jwk-public-key.json
    audiences: some-specific-name
and then in your (decoded) JWT
// Header
{
  "alg": "RS256",
  "kid": "custom-id-system-you-specify",
  "typ": "JWT"
}
// Payload
{
  "aud": [
    "some-specific-name"
  ],
  "exp": 1590139950,  <-- MUST be INTEGER value
  "iat": 1590136350,  <-- ^^
  "iss": "fart",
  "sub": "Here is some sulphur dioxide"
}
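For context, here is roughly how such a signed token ends up attached to a gRPC call against the ESPv2 endpoint. This is a sketch only: the Cloud Run host name, the generated your_pb2/your_pb2_grpc modules and the method name are placeholders for whatever your own proto defines.

import grpc
# your_pb2 / your_pb2_grpc are the modules generated from your own .proto (placeholders here)
import your_pb2
import your_pb2_grpc

signed_jwt = "<the serialized JWT from your custom issuer>"

# ESPv2 on Cloud Run is served over TLS on port 443
channel = grpc.secure_channel(
    "your-espv2-service-xyz-uc.a.run.app:443",
    grpc.ssl_channel_credentials(),
)
stub = your_pb2_grpc.YourServiceStub(channel)

# The token travels in the "authorization" metadata header, which ESPv2 validates
response = stub.YourMethod(
    your_pb2.YourRequest(),
    metadata=[("authorization", "Bearer " + signed_jwt)],
)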
Error/Bug #1 - "iat" and "exp" should be integers, NOT strings
As you can already see from the decoded JWT above, the "exp" and "iat" claims MUST be integer values (this is stated clearly in RFC 7519, sections 4.1.4 and 4.1.6).
This seems like a simple mistake, but as the ESPv2 contributors and I found, the error messages weren't particularly helpful in figuring out what the problem was.
For example, if you had written the "iat" and "exp" claims as strings rather than integers, the ESPv2 container would inform the developer that the JWT was either not properly Base64URL-encoded or was invalid JSON, which, to the unaware, might look like the library had been used incorrectly.
Some changes were made to the error messages to address this going forward; you can see the issue that was raised, and its conclusion, here.
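If you are generating tokens with jwcrypto, passing the claims as a plain dict keeps "iat" and "exp" as integers. A minimal sketch under that assumption (the key, kid and claim values are placeholders):

import time
from jwcrypto import jwk, jwt

# Placeholder RSA key; in practice reuse the key whose public half sits behind your jwks_uri
key = jwk.JWK.generate(kty='RSA', size=2048)

now = int(time.time())  # integer seconds since epoch, NOT a string
token = jwt.JWT(
    header={"alg": "RS256", "kid": "custom-id-system-you-specify", "typ": "JWT"},
    claims={
        "iss": "fart",
        "aud": ["some-specific-name"],
        "iat": now,          # integer (RFC 7519 NumericDate)
        "exp": now + 3600,   # integer, one hour from now
        "sub": "Here is some sulphur dioxide",
    },
)
token.make_signed_token(key)
signed_jwt = token.serialize()  # compact JWS string to send to ESPv2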
Error #2 - Wrong key, and JSON format
Before claiming victory in this battle of attrition, I ran into one more error, which was just about as vague as the previous one.
When trying to call a method that required authentication, I was greeted with the following:
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.UNAUTHENTICATED
    details = "Jwks remote fetch is failed"
    debug_error_string = "{"created":"@1590054504.221608572","description":"Error received from peer ipv4:216.239.36.53:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Jwks remote fetch is failed","grpc_status":16}"
>
You might think this means that ESPv2 couldn't retrieve your key.
The cause was actually three related issues:
ESPv2 only supports X509 and RSA key pairs, so don't make the same mistake I did and use EC-generated key pairs.
jwcrypto does NOT add the "alg" and "kid" fields to your key file by default; make sure these are added, or jwcrypto won't know what algorithm to use when signing the JWTs you generate.
The final error was the format of the JSON file. When you call the methods to export the keys, you get the following:
{
  "e": "XXXX",
  "kty": "RSA",
  "n": "crazyRandomNumbersAndLetters",
  "alg": "RS256", <-- NOT ADDED BY DEFAULT
  "kid": "custom-id-system-you-specify" <-- ^^
}
Simply hosting a JSON file containing only this object and pointing jwks_uri at it is incorrect. The proper format is as follows:
{
  "keys": [
    {
      "e": "XXXX",
      "kty": "RSA",
      "n": "crazyRandomNumbersAndLetters",
      "alg": "RS256", <-- NOT ADDED BY DEFAULT
      "kid": "custom-id-system-you-specify" <-- ^^
    }
  ]
}
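As a rough sketch of how you might produce that file with jwcrypto (the kid and file name are placeholders; reuse the same RSA key you sign tokens with):

import json
from jwcrypto import jwk

key = jwk.JWK.generate(kty='RSA', size=2048)  # placeholder; use your real signing key

public_jwk = json.loads(key.export_public())  # {"e": ..., "kty": "RSA", "n": ...}
public_jwk["alg"] = "RS256"                   # NOT added by default
public_jwk["kid"] = "custom-id-system-you-specify"

# Wrap the key in a JWK Set ("keys" array), which is what jwks_uri must serve
with open("jwk-public-key.json", "w") as f:
    json.dump({"keys": [public_jwk]}, f)

Upload jwk-public-key.json to your public bucket and point jwks_uri at it.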
If you do all this, it should be smooth sailing.
I apologise for getting a little off topic, but I hope others don't have to jump through as many hoops as I did to get this going :)
I can't thank the developers at ESPv2 enough for their quick replies and insight into the problem. Top job!
Best of luck coding!
Despicable B.

Related

IVS Token Authorisation

I hope I can explain the problems I'm having clearly.
I used this guide (https://catalog.us-east-1.prod.workshops.aws/v2/workshops/022adf04-0ff9-49af-848f-993e42575540/en-US/playauth) to generate a playback token, and after reading and following the entire guide I was able to generate a token successfully.
"statusCode": 200,
"body": "{\"token\":\"eyJhbGciOiJFUzM4NCIsInR5cCI6IkpXVCJ9.xxxxxxxxxxxxxxxxxxxxxxm4iOiJhcm46YXdzOml2czpldS13ZXN0LTE6MDgxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxOmFjY2Vzcy1jb250cm9sLWFsbG93LW9yaWdpbiI6Imh0dHBzOi8vd3d3LmZvb3R5LnRvIiwiaWF0IjoxNjQ0MzUyMjI2LCJleHAiOjE2NDY5NDQyMjZ9.EQ1tnLU5uQhxnkVjJvrOo_z1Jlf4w0yMuhgWtB8ZBf_NKgWJCcMmToKia8u1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"}",
"headers": {
"Access-Control-Allow-Origin": "https://www.xxxxxx.com"
}
}
I append this token to the stream URL and everything works as it should:
https://247dfhj3e56u467.us-xxxx-1.playback.live-video.net/api/video/v1/us-east-1.08xxxxxx06.channel.GpxxxxxxxxxxwA.m3u8?token=eyJhbGciOiJFUzM4NCIsInR5cCI6IkpXVCJ9.xxxxxxxxxxxxxxxxxxxxxxm4iOiJhcm46YXdzOml2czpldS13ZXN0LTE6MDgxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxOmFjY2Vzcy1jb250cm9sLWFsbG93LW9yaWdpbiI6Imh0dHBzOi8vd3d3LmZvb3R5LnRvIiwiaWF0IjoxNjQ0MzUyMjI2LCJleHAiOjE2NDY5NDQyMjZ9.EQ1tnLU5uQhxnkVjJvrOo_z1Jlf4w0yMuhgWtB8ZBf_NKgWJCcMmToKia8u1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
This token only works on my domain; when I use it on my other domain I get a CORS error, because it only works on the domain I specified in the Lambda function.
So far, the token generator works...
But as soon as someone grabs the stream link from the page source, they can use it in VLC or any other m3u8 player; even some HLS/m3u8 browser extensions in Chrome can play it effortlessly.
My questions to you are as follows:
Am I using the given token correctly?
Is there perhaps a Lambda function script (JSON) that no longer enables these playback options?
Or can I solve this in another way, so that the stream can only be played on my domain and not in a VLC player or browser extension?
Hopefully someone has a solution for this, because otherwise the token generator is not really valuable.
Sincerely.
Your use case seems correct.
Unfortunately, it's difficult to revoke a JWT once you have created it. There is an expiration-time property; the default value is 2 days in index.js, and you can set it shorter if needed.
If it were me, I'd integrate with some other system such as Cognito or IP-based security groups.
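If you do shorten the expiry, the token can also be minted outside the provided Lambda; here is a rough sketch in Python with PyJWT (the channel ARN, allowed origin and private key path are placeholders, and the claim names mirror the decoded token from the question):

import time
import jwt  # PyJWT, with the "cryptography" package installed for ES384

with open("ivs-playback-private-key.pem") as f:
    private_key = f.read()

now = int(time.time())
payload = {
    "aws:channel-arn": "arn:aws:ivs:eu-west-1:123456789012:channel/XXXXXXXX",
    "aws:access-control-allow-origin": "https://www.example.com",
    "iat": now,
    "exp": now + 300,  # 5 minutes instead of the 2-day default
}

playback_token = jwt.encode(payload, private_key, algorithm="ES384")
print(playback_token)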

How to use Mailgun sandbox?

I am currently testing Mailgun. Therefore, I don't want to add any payment information at the moment.
So I'm working with the sandbox, and a verified address in the authorized recipients attached to the sandbox. According to the documentation, this limited setup is supposed to work for testing purposes.
I use Postman to better identify how to work with the API, ruling out any potential issues with my own code.
Here is my Hello World config:
POST https://api:____my_API_Key___@api.mailgun.net/v3/sandboxXXXXX.mailgun.org/messages
The dashboard indicates that the sandbox is located in the US, so I don't use the European API.
Body:
from: postmaster@sandboxXXX.mailgun.org (also tried the verified email address, and postmaster <postmaster@sandbox...>)
to: bob@marley.com (the verified email address)
subject: test
text: Hello World!
I get a 400 error, Bad Request, and the documentation suggests looking for missing parameters.
The other posts I found so far did not help me pinpoint the error either.
Also, Mailgun provides a Postman collection, but it did not help either.
Indeed, I dream of detailed documentation of the API requirements and value formatting... What are the required parameters, if the error means I'm missing some?
Any idea of what I am missing?
Here is the solution.
I had to guess and analyze some examples from the provided Postman Collection to find out what the documentation is supposed to explain in the first place:
4 required headers:
Authorization
Value: Basic XXXXX, where XXXXX is the Base64-encoded version of api:___your_API_key___.
Content-Type
Value: multipart/form-data; boundary=XXX, where XXX is any short string used to mark the boundaries within the sent content.
Content-Length
Value: XXX, where XXX is the size of the request body.
Host
Value: mydomain.com, or your IP if sending from Postman...
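For what it's worth, the same request is simpler from code, where the multipart body and the Authorization/Content-Type/Content-Length headers are built for you. A minimal sketch with Python requests (sandbox domain, key and recipient are placeholders; the recipient must be on the sandbox's authorized list):

import requests

resp = requests.post(
    "https://api.mailgun.net/v3/sandboxXXXXX.mailgun.org/messages",
    auth=("api", "___your_API_key___"),  # HTTP Basic auth with user "api"
    data={
        "from": "postmaster@sandboxXXXXX.mailgun.org",
        "to": "bob@marley.com",          # the verified/authorized recipient
        "subject": "test",
        "text": "Hello World!",
    },
)
print(resp.status_code, resp.text)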

Autodesk Data Management API 403-Error

I am trying to retrieve data via the Autodesk Data Management API. So far I've created a Forge app and connected it with a BIM 360 integration.
Then I wanted to get a list of all hubs, but when I do so, I receive a JSON object which contains a warning:
warnings: [{
  "AboutLink": null,
  "Detail": "You don't have permission to access this API",
  "ErrorCode": "BIM360DM_ERROR",
  "HttpStatusCode": "403",
  ...
}]
I called the web service via AJAX, which looks like this:
this.getToken(function(token) {
  $.ajax({
    url: "https://developer.api.autodesk.com/project/v1/hubs",
    beforeSend: function(xhr) {
      xhr.setRequestHeader("Authorization", "Bearer " + token);
    }
  }).done(...);
});
The token is a 3-legged one. I am not sure which API I lack permission for, because I am pretty sure I have permission for BIM 360 (I created the integration as an administrator).
In addition to what ZHong mentioned, I would suggest you try this sample. It will ask you to provision your Forge client ID under your BIM 360 settings; just follow the steps that the app presents.
With both 2- and 3-legged tokens, the app accessing the data (the Forge client ID) needs authorization from the account admin. Without that, the Hubs endpoint will not return your BIM 360 hub, and the same applies to the Projects endpoint inside it.
Does everything else work fine? For example, can you get all the hubs successfully? I just verified on my side: I see the response including the same warning you mentioned, but the hubs are listed correctly, and you can get the projects/items/versions without a problem. I pasted my Postman response as follows.
If you check the blog https://forge.autodesk.com/blog/tutorial-using-curl-3-legged-authentication-bim-360-docs-upload, it also shows the same warning, but it seems to have no impact on the subsequent operations. I am not exactly sure what the warning means; I will check and update the details, but so far it seems you can ignore it.
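For anyone checking the same thing outside Postman, here is a rough sketch of calling the Hubs endpoint in Python and separating the real data from the warning block (the token value is a placeholder):

import requests

resp = requests.get(
    "https://developer.api.autodesk.com/project/v1/hubs",
    headers={"Authorization": "Bearer YOUR_3_LEGGED_TOKEN"},
)
body = resp.json()

# The harmless BIM360DM_ERROR warning shows up here
for warning in body.get("warnings", []):
    print("warning:", warning.get("ErrorCode"), warning.get("Detail"))

# The hubs you actually have access to show up here
for hub in body.get("data", []):
    print("hub:", hub["id"], hub["attributes"]["name"])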

API Console Issue

I've been using WSO2 API Manager 1.9.1 for the past month on a static IP, and we liked it enough to put it on Azure behind a fully qualified domain name. As we are still only using it for internal purposes, we shut the VM down during off hours to save money. Our Azure setup does not guarantee the same IP address each time the VM restarts. The FQDN allows us to always reach https://api.mydomain.com regardless of what happens with the VM IP.
I updated the appropriate config files to the FQDN and everything seems to be working well. However! The one issue I have and cannot seem to resolve is calling APIs from the API Console. No matter what I do, I get a response as below:
Response Body
no content
Response Code
0
Response Headers
{
"error": "no response from server"
}
Mysteriously, I can successfully make the same calls from the command line or SoapUI. So it's something unique about the API Console. I can't seem to find anything useful in the logs or by googling. I do see a recurring error, but it's not very clear or even complete (it seems to be cut off).
[2015-11-17 21:33:21,768] ERROR - AsyncDataPublisher Reconnection failed for
Happy to provide further inputs / info. Any suggestions on root cause or where to look is appreciated. Thanks in advance for your help!
Edit #1 - adding screenshots from Chrome
The API Console may not be giving you a response due to one of the following issues:
If you are using HTTPS, you have to open the gateway URL in the browser and accept the certificate before invoking the API from the API Console (this is the case when there is no signed certificate on the gateway).
A CORS issue, which may be because your domain is not in the Access-Control-Allow-Origin header of the OPTIONS call's response (see the sketch after this list).
If you create an API with an HTTPS backend, you have to import the endpoint's SSL certificate into client-truststore.jks.
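To check the second point, you can replay the preflight request the API Console makes and inspect the CORS headers yourself; a rough sketch with Python requests (the gateway URL, resource path and console origin are placeholders for your own setup):

import requests

resp = requests.options(
    "https://api.mydomain.com:8243/yourapi/1.0.0/resource",  # placeholder gateway URL
    headers={
        "Origin": "https://api.mydomain.com:9443",           # placeholder API Console origin
        "Access-Control-Request-Method": "GET",
    },
    verify=False,  # only while testing against a self-signed gateway certificate
)
print(resp.status_code)
print(resp.headers.get("Access-Control-Allow-Origin"))

If the Access-Control-Allow-Origin header is missing or doesn't include the console's origin, that points to the CORS configuration.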

Need to handle 404 errors when using the Google Admin SDK and searching for users

I am having trouble catching errors when using the Google Admin SDK and .NET. There was not much information or practical examples on using this on Google's website that I could find.
Alas, I have working code that will search a Google Apps domain for a user account and return properties about that account. What I do not have is error handling for scenarios where the searched-for user account is not found (indicating that the user does not exist).
It seems this situation is returned as an HTTP 404 response. Great, no problem with that, but I cannot seem to handle it gracefully in my app. IIS is simply throwing a "The resource could not be found" error message at /xxx/appname/DefaultRedirectErrorPage.aspx. I am not sure where this is coming from, as it is not defined in web.config or default.asax.
If there was a practical example of how to handle such a simple response from Google, using .NET, that would be great.
I don't know anything about the .NET library... BUT I have worked with the underlying HTTP requests (I'm using PHP). The API may not be returning what you think it should...
If there is an error, the API should return an "error" object in the JSON response. I check for that when looking for errors.
In my interaction with the Directory API, when retrieving a user that does not exist, the JSON response simply does not contain a users object. Even with NO users returned, it DOES return a 200 response, not an error.
For example, if I search for a user that does NOT exist, Google sends back something like this:
{
  "kind": "admin#directory#users",
  "etag": "\"LOTSOFCHARACTERSHERETHATAREAKEYOFSOMEKIND\""
}
when I search for a valid user, the JSON I get back is like this:
{
  "kind": "admin#directory#users",
  "etag": "\"LOTSOFCHARACTERSHERETHATAREAKEYOFSOMEKIND\"",
  "users": [
    {
      "kind": "admin#directory#user",
      "id": "109020414202478541143",
      // lots of other properties here //
    }
  ]
}
Are you using Google's .NET library or your own? Either way, I would check to make sure that the user object you're trying to access actually exists when an invalid user is searched.
hope that helps.
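For comparison, this is roughly what that check looks like at the raw HTTP level in Python (the access token, domain and query are placeholders, and an OAuth token with the directory users scope is assumed):

import requests

resp = requests.get(
    "https://www.googleapis.com/admin/directory/v1/users",
    params={"domain": "example.com", "query": "email:missing.user@example.com"},
    headers={"Authorization": "Bearer YOUR_ACCESS_TOKEN"},
)
data = resp.json()

if "error" in data:
    print("API error:", data["error"])
elif "users" not in data:
    print("No matching user, but still HTTP", resp.status_code)
else:
    print("Found:", [u["primaryEmail"] for u in data["users"]])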
A couple of years late, but I was trying to figure out the same thing and found that Google doesn't have any error-handling documentation around their Admin SDK. I was able to work around it with a try...catch in Google Apps Script.
try {
  var page = AdminReports.CustomerUsageReports.get(date, {
    parameters: parameters.join(','),
    maxResults: 500,
    customerId: customerId
  });
} catch (error) {
  console.log(error.code);
  console.log(error.message);
}