Autodesk Data Management API 403 error

I am trying to retrieve data via the Autodesk Data Management API. So far I've created a Forge app and connected it with a BIM 360 integration.
Then I wanted to get a list of all hubs, but when I do so, I receive a JSON object which contains a warning:
warnings: [{
    "AboutLink": null,
    "Detail": "You don't have permission to access this API",
    "ErrorCode": "BIM360DM_ERROR",
    "HttpStatusCode": "403",
    ...
}]
I call the web service via AJAX, which looks like this:
this.getToken(function(token) {
    $.ajax({
        url: "https://developer.api.autodesk.com/project/v1/hubs",
        beforeSend: function(xhr) {
            xhr.setRequestHeader("Authorization", "Bearer " + token);
        }
    }).done(...);
});
The token is a 3-legged one. I am not sure which API I lack permission for, because I am pretty sure that I have permission for BIM 360 (I created the integration as an administrator).

In addition to what ZHong mentioned, I would suggest you try this sample. It will ask you to provision your Forge Client ID under your BIM 360 settings; just follow the steps the app presents.
With both 2-legged and 3-legged tokens, the app accessing the data (the Forge Client ID) needs authorization from the account admin. Without that, the Hubs endpoint will not return your BIM 360 hub, and the same applies to the Projects endpoint inside it.

Does everything else work fine? For example, can you get all the hubs successfully? I just verified on my side: I can see the response including the same warning you mentioned, but the hubs are listed correctly, and you can get the projects/items/versions without problems. I pasted my Postman response below (screenshot omitted).
If you check the blog https://forge.autodesk.com/blog/tutorial-using-curl-3-legged-authentication-bim-360-docs-upload, it also shows the same warning, but it seems to have no impact on the subsequent operations. I am not exactly sure what the warning means; I will check and update the details, but so far it seems you can ignore it.
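For reference, the same call is easy to reproduce outside the browser. Below is a minimal Python sketch (using the requests library) that assumes you already have a valid 3-legged token; the exact placement of the warnings block in the response may vary, so it checks both common locations.
import requests

token = "YOUR_3_LEGGED_TOKEN"  # assumed to be obtained elsewhere

# Same call as the AJAX snippet in the question.
resp = requests.get(
    "https://developer.api.autodesk.com/project/v1/hubs",
    headers={"Authorization": "Bearer " + token},
)
body = resp.json()

# The hubs should still be listed despite the warning...
print([hub["attributes"]["name"] for hub in body.get("data", [])])

# ...and the warning rides along with the data (placement may vary).
print(body.get("warnings") or body.get("meta", {}).get("warnings"))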


Geolocation service with AWS API Gateway and Lambda

What we are trying to do
We are trying to set up a very simple Geolocation service with API Gateway and Lambda.
Very similar to https://ipstack.com/, but we don't want to use an external service as we believe it could be an issue in some jurisdictions to send a non-anonymized IP address to a service we don't control (before getting the user's consent).
We would like a simple API at https://location.my-site.com that returns the country (for GDPR, cookie-consent, etc. purposes).
Now, it seems that there is a lightweight CloudFront distribution in front of API Gateway that would produce the header "Cloudfront-Viewer-Country", which would be very simple and achieve what we need, i.e. the Lambda receives Cloudfront-Viewer-Country and just sends it back.
What we have tried
I have seen solutions such as this one: Build a Geolocation API using AWS Lambda and MaxMind, but I struggle to see why deploying an RDS and maintaining the MaxMind database would make sense for us, if it is already available from Cloudfront-Viewer-Country.
I have seen this question: Accessing cloudfront-viewer-country header in AWS API Gateway using HTTP Proxy?, and tried implementing the answer from Michael - sqlbot. But I cannot seem to access the headers.
I have also tried what is suggested in this post, but I can't seem to access the value of Cloudfront-Viewer-Country either.
What we are doing (in conjunction with 'What we have tried')
To check whether the header is available, I am using the following Python Lambda function:
import json

def lambda_handler(event, context):
    response = {
        'status': '200',
        'statusDescription': 'Found',
        'headers': {
            'location': [{
                'event': json.dumps(event)
            }]
        }
    }
    return response
What the problem is
The event JSON dump doesn't contain Cloudfront-Viewer-Country.
I suspect I'm doing something wrong but I really can't figure it out. Any pointer would be very much appreciated.
Thank you
I was able to get access to Cloudfront-Viewer-Country by setting Endpoint Type = Edge optimized.
I could not get it to work with Endpoint Type = Regional or with an HTTP API Gateway.
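For anyone replicating this, here is roughly what the Lambda side could look like once the header arrives. This is a minimal sketch, assuming an edge-optimized REST API with a Lambda proxy integration (the event shape below is the standard proxy-integration format):
import json

def lambda_handler(event, context):
    # With a proxy integration, incoming request headers arrive in
    # event['headers']; the CloudFront header only shows up when the
    # endpoint type is edge-optimized, as noted above. Header names
    # are normalized to lowercase to avoid case mismatches.
    headers = {k.lower(): v for k, v in (event.get('headers') or {}).items()}
    country = headers.get('cloudfront-viewer-country', 'unknown')
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps({'country': country}),
    }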

How would I setup a jwcrypto token issuer for google cloud run with gRPC?

I'm trying to create a custom authentication method for Google Cloud Endpoints. The idea is that I can configure my ESPv2 container (an Extensible Service Proxy based on Envoy), which is hosted on Google Cloud Run, to accept JWTs from a custom issuer, also hosted on Cloud Run.
Following the Endpoints guide for gRPC, I figure the jwks_uri: part of the YAML file should point to a URL that exposes the public key (which you can do by putting a JWK into a JSON file and hosting that JSON file on Google Cloud Storage, exposed to the public internet).
The part that has me stumped is the issuer. I've gone through RFC 7519, which states that the issuer is a string or URI value. I'm not very familiar with the specific Envoy implementation the ESPv2 container uses, but my best guess is that the issuer: option in the YAML file is simply matched against the domain or string the server issued when the token was created.
I'm probably wrong so I'd really appreciate some guidance on this one.
Kind regards,
Despicable B
issuer should be the "iss" field in the JWT token that you send to ESPv2.
Author's Solution
After working with the Google Cloud endpoints team, and some of the contributors for ESPv2, we figured it out (as well as found a few things to point out to anyone wanting to do this in future).
Addressing the original question
Indeed, as Wayne Zhang pointed out, the issuer can be ANY string value, so long as it matches the "iss" claim in the JWT payload.
e.g.
authentication:
  providers:
    - id: some-fancy-id
      issuer: fart # <-- Don't wrap ANY of these in double quotes
      jwks_uri: https://storage.googleapis.com/your-public-bucket/jwk-public-key.json
      audiences: some-specific-name
And then, in your (decoded) JWT:
// Header
{
  "alg": "RS256",
  "kid": "custom-id-system-you-specify",
  "typ": "JWT"
}
// Payload
{
  "aud": [
    "some-specific-name"
  ],
  "exp": 1590139950, // <-- MUST be an INTEGER value
  "iat": 1590136350, // <-- ^^
  "iss": "fart",
  "sub": "Here is some sulphur dioxide"
}
Error/Bug #1 - "iat" and "exp" must be integers, NOT strings
As you can see in the decoded JWT above, the "exp" and "iat" claims MUST be integer values (this is stated clearly in RFC 7519, sections 4.1.4 and 4.1.6).
This seems like a simple mistake, but as the ESPv2 contributors and I found, the error messages weren't particularly helpful in figuring out what the problem was.
For example, if you had written the "iat" and "exp" claims as strings rather than integers, the ESPv2 container would report that the JWT was either not properly Base64URL-encoded or was invalid JSON - which, to the unaware, might look like you had used the library incorrectly.
Changes to the error messages were made to address this; you can see the issue that was raised, and its conclusion, here.
Error #2 - Wrong key, and JSON format
Before claiming victory over this battle of attrition, I ran into one more error, which was just about as vague as the previous one.
When trying to call a method that required authentication, I was greeted with the following:
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.UNAUTHENTICATED
    details = "Jwks remote fetch is failed"
    debug_error_string = "{"created":"@1590054504.221608572","description":"Error received from peer ipv4:216.239.36.53:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Jwks remote fetch is failed","grpc_status":16}"
>
You might think this means that ESPv2 couldn't retrieve your key. The cause was actually three related issues:
ESPv2 only supports X509 and RSA key pairs, so don't make the same mistake I did and use EC-generated key pairs.
jwcrypto does NOT add the "alg" and "kid" fields to your key file by default; make sure these are added, or jwcrypto won't know what algorithm to use when signing the JWTs you generate.
The final error was the format of the JSON file. When you call the export methods, you get the following:
{
  "e": "XXXX",
  "kty": "RSA",
  "n": "crazyRandomNumbersAndLetters",
  "alg": "RS256", // <-- NOT ADDED BY DEFAULT
  "kid": "custom-id-system-you-specify" // <-- ^^
}
Simply pointing jwks_uri at a JSON file containing only that object is incorrect. The proper format is as follows:
{
  "keys": [
    {
      "e": "XXXX",
      "kty": "RSA",
      "n": "crazyRandomNumbersAndLetters",
      "alg": "RS256", // <-- NOT ADDED BY DEFAULT
      "kid": "custom-id-system-you-specify" // <-- ^^
    }
  ]
}
If you do all this, it should be smooth sailing.
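To pull the pieces together, here is a minimal jwcrypto sketch of the whole flow: generating an RSA key with "kid" and "alg" set, exporting the public half in the {"keys": [...]} format above, and signing a token whose claims match the service config. The kid and claim values are the placeholders from the examples above; treat this as an illustration rather than the exact code used.
import json
import time
from jwcrypto import jwk, jwt

# Generate an RSA key pair (NOT EC; see issue 1 above). Extra parameters
# such as "kid" and "alg" are stored on the key and appear in the export.
key = jwk.JWK.generate(kty='RSA', size=2048,
                       kid='custom-id-system-you-specify', alg='RS256')

# Build the JWKS document in the {"keys": [...]} format shown above,
# then host it at the URL your jwks_uri points to.
jwks = {"keys": [json.loads(key.export_public())]}
print(json.dumps(jwks, indent=2))

# Sign a token whose "iss" and "aud" match the service config and whose
# "exp"/"iat" are INTEGERS (see Error/Bug #1 above).
now = int(time.time())
token = jwt.JWT(
    header={"alg": "RS256", "kid": "custom-id-system-you-specify", "typ": "JWT"},
    claims={"iss": "fart", "aud": "some-specific-name",
            "sub": "Here is some sulphur dioxide",
            "iat": now, "exp": now + 3600},
)
token.make_signed_token(key)
print(token.serialize())  # send this as the Bearer token to ESPv2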
I apologise for getting a little off topic, but I hope others don't have to jump through as many hoops as I did to get this going :)
I can't thank the developers at ESPv2 enough for their quick replies and insight into the problem. Top job!
Best of luck coding!
Despicable B.

Cannot get AWS API Gateway to override response codes - FitBit endpoint verification

I am integrating FitBit with my company's platform and we are switching over from syncing with our own server to sending the data to an AWS Kinesis stream. This requires us to also set up an AWS API Gateway with a POST method to write the data to the stream. I've also set up a GET method on the same resource for the verification process.
Here's the problem I'm facing:
Once I have the API endpoint properly set up, FitBit provides a verification code and requires a verification process in which it sends a GET request to the endpoint with a ?verify={correctVerificationCode} query param and wants a 204 response, and one with a ?verify={incorrectVerificationCode} param and wants a 404 response. This would obviously be easy for me to accomplish in our Rails backend, where I'm in control of the code, but on AWS it's a tangled mess with little control.
I have read endless documentation on AWS about Mapping Templates and Integration Response, but no matter what I do, I cannot get the API to respond with anything other than a 200 (when the request is clean and has any ?verify param) or 500 (when I purposefully make a bad request). There is no straightforward answer in the AWS docs about this.
This is the closest I have come to a setup that the docs promise should work, yet it does not: using the Integration Response HTTP status regex together with a mapping template (screenshots omitted).
I'm two days in on this and frustrated to my wits' end. Help!
Just in case anyone finds this thread in the future and is struggling with the same issue - here is how you verify a FitBit Developer API app with an Amazon Kinesis stream being fed by an AWS API Gateway:
First, set up the POST method of your API - there are AWS guides for this. Select AWS Service as the integration type and Kinesis as the service, then set up a mapping template for application/json that looks like this:
#set($event = $input.body)
#set($data = '{"action":' + $event + ', "authorization": "' + $input.params('Authorization') + '", "stage": "' + $context.stage + '"}')
#set($body = $util.base64Encode($data))
{
  "Data": "$body",
  "PartitionKey": "shard-1",
  "StreamName": "gm-fitbit"
}
Once you've done that, create a GET method on the same resource. Set MOCK as the integration type and create the endpoint. Now click on the GET method and visit Method Request. Expand URL Query String Parameters and add verify as a query param. Now, go back to the method and visit Integration Response.
Expand the already existing 200 response, add an HTTP status regex of 2\d{2}, and leave handling as passthrough.
Expand Mapping Templates, and for 'application/json' create this mapping template:
{
  #if( $input.params('verify') == "theVerificationCodeProvidedToYouByFitbit" )
    #set($context.responseOverride.status = 204)
  #else
    #set($context.responseOverride.status = 404)
  #end
}
That's it! Deploy the API again, head back to Fitbit, and click verify!
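If you want to sanity-check the overrides before heading back to FitBit, a quick request against the deployed stage should show the two status codes. A minimal Python sketch; the invoke URL below is a placeholder:
import requests

# Placeholder invoke URL for your deployed stage and resource.
base = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/fitbit"

# The correct code should be overridden to 204, anything else to 404.
print(requests.get(base, params={"verify": "theVerificationCodeProvidedToYouByFitbit"}).status_code)  # 204
print(requests.get(base, params={"verify": "wrong-code"}).status_code)  # 404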
There. Now there is officially a guide online to integrating FitBit with an AWS Kinesis stream - the one I wish I had when struggling with this for three days.
Cheers!

API Gateway and Swagger UI

I currently have an API with multiple resources within our API Gateway, and I'm working on a Swagger UI page for it.
A lot of the Swagger UI definition I have written myself; I know there is export functionality, but it doesn't seem 100% ready yet.
When using the Try it out button from Swagger, a 200 response is handled perfectly and the result shows as expected. I am trying to get the error codes to display as well, and this is where I am stuck.
In API Gateway I have created a 401 in my Method Response, including some allowed headers, and this is being used in my Integration Response for an HTTP status regex of Unauthorized.* - the content is just passed through.
My Swagger UI definition response looks like this (for the 401 in particular):
"401": {
"description": "Unauthorized",
"headers": {
"Access-Control-Allow-Origin": {
"type": "string"
}
}
},
This just expects a string response. My `produces` list has everything needed, just in case:
"produces": ["application/json",
"text/json",
"application/xml",
"text/xml"],
The result through the inspector is correct; however, my Swagger UI still fails to display the response (screenshots omitted).
I have tried a multitude of things - assigning an object to the response type, changing my regex to 4\d{2} to catch any 4xx error, updating the produces definition - all with no luck.
Let me know if there is any other information needed to help.
I'm assuming you're using a custom authorizer or a Cognito authorizer, and that is what is producing the 401?
Unfortunately, there is a known limitation where error responses, such as a 401 or 403 from an authorizer, will not contain the configured header mappings.
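A possible workaround, not covered by the answer above and worth verifying against current AWS behavior: API Gateway's Gateway Responses can attach headers to the 401 an authorizer produces, independently of integration response mappings. A minimal boto3 sketch, with a placeholder REST API ID:
import boto3

client = boto3.client('apigateway')

# Attach a CORS header to the 401 emitted before the integration runs.
# 'abc123' is a placeholder REST API ID.
client.put_gateway_response(
    restApiId='abc123',
    responseType='UNAUTHORIZED',
    responseParameters={
        'gatewayresponse.header.Access-Control-Allow-Origin': "'*'"
    },
)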

WSO2 - simple endpoint fails

I am trying to set up a simple API test against a local endpoint. I have created the sample API (phone number lookup) and that works fine.
http://192.168.1.11:8080/api/simpleTest is my endpoint, and the WSO2 service also runs on 192.168.1.11 ... but when I test it in the Publisher, it always fails. This is a simple GET with no parameters.
I can run it from a browser or cURL (outside of WSO2) and it works fine.
Thanks.
I assume you are talking about clicking the Test button when providing the backend endpoint in the API Publisher.
The way the Test button works at the moment (as far as I understand) is that it invokes the HTTP HEAD method on the endpoint provided (because, according to RFC 2616, "This method is often used for testing hypertext links for validity, accessibility, and recent modification.").
Then it checks the response. If the response is valid or 405 (Method Not Allowed), the URL is marked as valid.
Thus, if the backend does not properly follow the RFC, otherwise-working URLs can sometimes be declared invalid during the test because of that improper HEAD response evaluation. This is just a convenience check, and you can ignore it if you know the endpoint works for the methods and resources you need.
P.S. I checked this on API Cloud, but the behavior is identical to the downloadable API Manager.
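For reference, the check described above is easy to reproduce outside WSO2. A minimal Python sketch, assuming the test boils down to a HEAD request that passes on success or 405:
import requests

# The Test button reportedly issues a HEAD request and treats a
# successful response, or a 405, as "valid".
resp = requests.head("http://192.168.1.11:8080/api/simpleTest")
if resp.ok or resp.status_code == 405:
    print(resp.status_code, "- would be marked valid")
else:
    print(resp.status_code, "- would be marked invalid")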