Alexa is 'Unable to Reach the Requested Skill' - amazon-web-services

I'm receiving "I'm unable to reach the requested skill" message from an Alexa dashboard testing console for the skill that used to work before (with no modifications to any underlying infrastructure or code).
Here's the error obtained from Alexa's device logs:
{
"header": {
"namespace": "SkillDebugger",
"name": "CaptureError",
"messageId": "57d00be6-19d6-4529-b0b1-4c5d6c2760ac"
},
"payload": {
"skillId": "amzn1.ask.skill.db1bac88-183d-409c-9d3e-0e69fa0f5fe2",
"timestamp": "2018-09-27T19:11:51.066Z",
"dialogRequestId": "d9ec106d-2ef2-4526-a156-f4714ce5d034",
"skillRequestId": "amzn1.echo-api.request.1e166266-56e1-4c51-b40a-3ceb144f997f",
"code": "SKILL_ENDPOINT_ERROR",
"description": "An error occurred while issuing a SpeechletRequest for (requestId [amzn1.echo-api.request.1e166266-56e1-4c51-b40a-3ceb144f997f]",
"debuggingInfo": {
"type": "SkillExecutionInfo",
"content": {
"invocationRequest": {
"endpoint": "https://emptio.serveo.net/abc/api/v1/alexa",
"body": {
"version": "1.0",
"session": {
"new": false,
"sessionId": "amzn1.echo-api.session.bfc02d53-fe83-4c70-b731-ea7ede99d20a",
"application": {
"applicationId": "amzn1.ask.skill.db1bac88-183d-409c-9d3e-0e69fa0f5fe2"
},
"user": {
"userId": "amzn1.ask.account.AGX2NO3NXXDS6NLEZMDZXMRZZPJ3DLEERYK7J3NUPFUYRADFB2HRILB7BZVTN336OFVSNFFUP3VDVFHERK5PKQE5H32EQ5GGWTT67EMDQKP22Q7NTXXNYDUTYNCYI6EJUEODQ54VHKW4JSWVCS7JINWLYH2LICQVETFGZBY6NBDJVEX66VCGCZMRTFZYAG2E3IXDPMPVF3U4VMY",
"accessToken": "Atza|IwEBIM_YZylf-iVoydW0WhXTS4ykk6oA0FwI9Aa7Pdz_pysLPaL1AJwQLXA-Y1GJabHTWMJxfDEKyIiLFuxEPnTxuYaEDyany7WXzHMOd0-iiD9lYBxE6rIXkC3Z-I5PYU6DQtkT6DHxbusrkyGTb1bSfbznIaaFat3yNvKY9mXaNHEEhuuPRZJkXjffBA9WKzWrkGetOdHVvo-PLw2w9rWUiQQuJ6ryzQjugYILyCuTry3qz8lvqWGxYX0XB3dx_CGuzjEnNP0-X2ozhLXN8cBjtBrl7MlTffNyo6K94vi24-16bdIdFZG3mVL_bKSCXzAx2qzPJvBCn953FrPVw9zd7CtOintRSBDZ9Aw_QgKqTklliWTBP_8uRqq_nuMB8s992-Yhi6Zb-k7VvyYp7oLtJ8ggRqRlRk9vS4HBxyfKCxvfXmvlmZJlAtGjec_-Bx8UB2pf1ZH0xi-2LYpezVh2e7dgWenKU0PHvtduprVtpO4E72148mddcYyQRzAEdk8LYQx1SiamYY64_qmkv14h1qBPUIQPuv3MFt2PB7Mhm6cVTA"
}
},
"context": {
"System": {
"application": {
"applicationId": "amzn1.ask.skill.db1bac88-183d-409c-9d3e-0e69fa0f5fe2"
},
"user": {
"userId": "amzn1.ask.account.AGX2NO3NXXDS6NLEZMDZXMRZZPJ3DLEERYK7J3NUPFUYRADFB2HRILB7BZVTN336OFVSNFFUP3VDVFHERK5PKQE5H32EQ5GGWTT67EMDQKP22Q7NTXXNYDUTYNCYI6EJUEODQ54VHKW4JSWVCS7JINWLYH2LICQVETFGZBY6NBDJVEX66VCGCZMRTFZYAG2E3IXDPMPVF3U4VMY",
"accessToken": "Atza|IwEBIM_YZylf-iVoydW0WhXTS4ykk6oA0FwI9Aa7Pdz_pysLPaL1AJwQLXA-Y1GJabHTWMJxfDEKyIiLFuxEPnTxuYaEDyany7WXzHMOd0-iiD9lYBxE6rIXkC3Z-I5PYU6DQtkT6DHxbusrkyGTb1bSfbznIaaFat3yNvKY9mXaNHEEhuuPRZJkXjffBA9WKzWrkGetOdHVvo-PLw2w9rWUiQQuJ6ryzQjugYILyCuTry3qz8lvqWGxYX0XB3dx_CGuzjEnNP0-X2ozhLXN8cBjtBrl7MlTffNyo6K94vi24-16bdIdFZG3mVL_bKSCXzAx2qzPJvBCn953FrPVw9zd7CtOintRSBDZ9Aw_QgKqTklliWTBP_8uRqq_nuMB8s992-Yhi6Zb-k7VvyYp7oLtJ8ggRqRlRk9vS4HBxyfKCxvfXmvlmZJlAtGjec_-Bx8UB2pf1ZH0xi-2LYpezVh2e7dgWenKU0PHvtduprVtpO4E72148mddcYyQRzAEdk8LYQx1SiamYY64_qmkv14h1qBPUIQPuv3MFt2PB7Mhm6cVTA"
},
"device": {
"deviceId": "amzn1.ask.device.AGUTTO7VCXPCUUSXNDCNO6LK7LZHUKPDGZBOXUOBNRNOBGD7FHBJWHOK3LJNQX4U47HTFLUXJ6MHBL6V7UCDNTWOMBJIP5R4R2ZVK3XJX42PEZG6J6TCS3U7NSYZZ3PDCUSH22CY7LYGNIK2MGXCUGR4ITQQ",
"supportedInterfaces": {}
},
"apiEndpoint": "https://api.amazonalexa.com",
"apiAccessToken": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IjEifQ.eyJhdWQiOiJodHRwczovL2FwaS5hbWF6b25hbGV4YS5jb20iLCJpc3MiOiJBbGV4YVNraWxsS2l0Iiwic3ViIjoiYW16bjEuYXNrLnNraWxsLmRiMWJhYzg4LTE4M2QtNDA5Yy05ZDNlLTBlNjlmYTBmNWZlMiIsImV4cCI6MTUzODA3OTEwNywiaWF0IjoxNTM4MDc1NTA3LCJuYmYiOjE1MzgwNzU1MDcsInByaXZhdGVDbGFpbXMiOnsiY29uc2VudFRva2VuIjpudWxsLCJkZXZpY2VJZCI6ImFtem4xLmFzay5kZXZpY2UuQUdVVFRPN1ZDWFBDVVVTWE5EQ05PNkxLN0xaSFVLUERHWkJPWFVPQk5STk9CR0Q3RkhCSldIT0szTEpOUVg0VTQ3SFRGTFVYSjZNSEJMNlY3VUNETlRXT01CSklQNVI0UjJaVkszWEpYNDJQRVpHNko2VENTM1U3TlNZWlozUERDVVNIMjJDWTdMWUdOSUsyTUdYQ1VHUjRJVFFRIiwidXNlcklkIjoiYW16bjEuYXNrLmFjY291bnQuQUdYMk5PM05YWERTNk5MRVpNRFpYTVJaWlBKM0RMRUVSWUs3SjNOVVBGVVlSQURGQjJIUklMQjdCWlZUTjMzNk9GVlNORkZVUDNWRFZGSEVSSzVQS1FFNUgzMkVRNUdHV1RUNjdFTURRS1AyMlE3TlRYWE5ZRFVUWU5DWUk2RUpVRU9EUTU0VkhLVzRKU1dWQ1M3SklOV0xZSDJMSUNRVkVURkdaQlk2TkJESlZFWDY2VkNHQ1pNUlRGWllBRzJFM0lYRFBNUFZGM1U0Vk1ZIn19.LzPCt8QPxkEa5jFK3IMGMQLWS3vXOopyGKBu0cAy1cnJzAk7wnbKwc9eyQYDMr3uH7MyHr4s7xUKpWlvspGOAL3LqKxFbxqpB5zIjhKifqdGQhB_nurOAjeyZOipZ0ZhSuPN9fqTwp7zwca4LdYz6Kuahklz7D7pU7ICNI1DNqNKDx9HmyWbJIwXWL3MvS9sEujDo15oTdiueNaCbC7kLnPi0adrukHy3J6HVN_XjWS5mSSawuObgiT2b9eLm4qntoMG7MnDTSrzxmhKgXm3WrbFxRW_ZKE3uu1wa7-412f8DPxvbVZkeYDRwWMTO8s7BtnzjPcKEcT6daLXKRgpVw"
}
},
"request": {
"type": "SessionEndedRequest",
"requestId": "amzn1.echo-api.request.1e166266-56e1-4c51-b40a-3ceb144f997f",
"timestamp": "2018-09-27T19:11:47Z",
"locale": "en-US",
"reason": "ERROR",
"error": {
"type": "INVALID_RESPONSE",
"message": "An exception occurred while dispatching the request to the skill."
}
}
}
},
"invocationResponse": null,
"metrics": {
"skillExecutionTimeInMilliseconds": 3107
}
}
}
}
}
As seen in the response above, the skill is configured with the endpoint https://emptio.serveo.net/abc/api/v1/alexa, which is perfectly reachable.
Again, the exact same skill worked just yesterday, under the same invocation name.
I'm able to reach the endpoint above and verify that it is functional and responsive outside Alexa, but it is somehow not reachable from the Alexa dashboard.
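For reference, verifying the endpoint outside Alexa amounts to something like the sketch below (the payload is only a placeholder, not a full Alexa request envelope; it assumes the requests package is installed):
import requests

# Minimal reachability check against the configured endpoint. The body is a
# stand-in; a real Alexa request carries the full session/context envelope.
resp = requests.post(
    "https://emptio.serveo.net/abc/api/v1/alexa",
    json={"version": "1.0", "request": {"type": "LaunchRequest"}},
    timeout=8,  # Alexa allows roughly 8 seconds for a skill to respond
)
print(resp.status_code, resp.elapsed.total_seconds())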
I'm monitoring the logs from Serveo and they show no activity, meaning something is breaking before the webhook is even called.
What could be the reason for the error? How can I debug what is going on in the Alexa stack?

Make sure the option for the endpoint's SSL certificate type is correct. You can change this in the endpoint settings in the web development console.
Your host might be using a wildcard certificate, so select the wildcard option, save your endpoint again, and then retest reachability from the console.
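If you're not sure which certificate the host actually serves, a quick check like this (plain Python, standard library only) prints the certificate names so you can pick the matching option in the console:
import socket
import ssl

host = "emptio.serveo.net"  # endpoint host from the skill configuration above

ctx = ssl.create_default_context()
with ctx.wrap_socket(socket.create_connection((host, 443)), server_hostname=host) as tls:
    cert = tls.getpeercert()

print("subject:        ", cert.get("subject"))
print("subjectAltName: ", cert.get("subjectAltName"))
# An entry like ('DNS', '*.serveo.net') means a wildcard certificate,
# so the wildcard option should be selected for the endpoint.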

Related

Firebase functions deploy: The service has encountered an error during container import

We're trying to deploy Firebase functions and we continuously get this error: "The service has encountered an error during container import. Please try again later"
Here's a video overview: https://www.loom.com/share/9afb2facb5e3461ebef74e7e802a2761
{
"protoPayload": {
"#type": "type.googleapis.com/google.cloud.audit.AuditLog",
"status": {
"code": 14,
"message": "The service has encountered an error during container import. Please try again later"
},
"authenticationInfo": {},
"serviceName": "cloudfunctions.googleapis.com",
"methodName": "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction",
"resourceName": "projects/voypost-matching-prod/locations/europe-west3/functions/createFromJobPubSub"
},
"insertId": "-wq3kwnb2c",
"resource": {
"type": "cloud_function",
"labels": {
"project_id": "voypost-matching-prod",
"function_name": "createFromJobPubSub",
"region": "europe-west3"
}
},
"timestamp": "2023-01-07T16:39:01.688761Z",
"severity": "ERROR",
"logName": "projects/voypost-matching-prod/logs/cloudaudit.googleapis.com%2Factivity",
"operation": {
"id": "operations/dm95cG9zdC1tYXRjaGluZy1wcm9kL2V1cm9wZS13ZXN0My9jcmVhdGVGcm9tSm9iUHViU3ViL0pOaGkxYW9zMWxj",
"producer": "cloudfunctions.googleapis.com",
"last": true
},
"receiveTimestamp": "2023-01-07T16:39:02.021123649Z"
}
As per this doc, it may be due to non-UTF-8 characters, but there are none (checked using grep -axv '.*' ./lib/**/*.js, per https://stackoverflow.com/a/41741313).
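For completeness, the same check can be done directly in Python; this small sketch flags any file under ./lib that is not valid UTF-8:
import pathlib

# Flag any compiled function file that fails UTF-8 decoding, mirroring the
# grep -axv '.*' check referenced above.
for path in pathlib.Path("lib").rglob("*.js"):
    try:
        path.read_bytes().decode("utf-8")
    except UnicodeDecodeError as exc:
        print(f"{path}: {exc}")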
It has failed three times in a row and keeps failing, every time on the same functions.
There is always only one deployment in progress; we don't run multiple firebase functions deploy commands at the same time.
The original discussion is on their GitHub, but we were referred here.

All my Cloud Functions say, function is active but last deploy failed

I'm facing an issue with my Google Cloud Functions: every function, from the very first one I deployed to the ones I'm trying to upgrade today, shows the same status:
"Function is active, but the last deploy failed"
What might be causing this?
Here's the log visible for updating the function on the log explorer.
{
"protoPayload": {
"#type": "type.googleapis.com/google.cloud.audit.AuditLog",
"status": {},
"authenticationInfo": {
"principalEmail": "start#pyme.team"
},
"serviceName": "cloudfunctions.googleapis.com",
"methodName": "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction",
"resourceName": "projects/pyme-webapp/locations/us-central1/functions/applicationSubmitted"
},
"insertId": "d1k3hyd3jfe",
"resource": {
"type": "cloud_function",
"labels": {
"region": "us-central1",
"function_name": "applicationSubmitted",
"project_id": "pyme-webapp"
}
},
"timestamp": "2022-02-02T20:23:05.726462Z",
"severity": "NOTICE",
"logName": "projects/pyme-webapp/logs/cloudaudit.googleapis.com%2Factivity",
"operation": {
"id": "operations/cHltZS13ZWJhcHAvdXMtY2VudHJhbDEvYXBwbGljYXRpb25TdWJtaXR0ZWQvaWdGS2o4bXpjbDA",
"producer": "cloudfunctions.googleapis.com",
"last": true
},
"receiveTimestamp": "2022-02-02T20:23:06.263576440Z"
}
Similarly, all I can see in the log of the function itself is an image of the function log (attached in the original post).
The exact error I'm seeing and am concerned about is the "Function Error" with an orange hazard icon shown on update.
Attaching another, even more detailed update log as well.
{
"protoPayload": {
"#type": "type.googleapis.com/google.cloud.audit.AuditLog",
"authenticationInfo": {
"principalEmail": "start#pyme.team"
},
"requestMetadata": {
"callerIp": "80.83.136.68",
"callerSuppliedUserAgent": "FirebaseCLI/10.0.1,gzip(gfe),gzip(gfe)",
"requestAttributes": {
"time": "2022-02-02T20:21:00.491300Z",
"auth": {}
},
"destinationAttributes": {}
},
"serviceName": "cloudfunctions.googleapis.com",
"methodName": "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction",
"authorizationInfo": [
{
"resource": "projects/pyme-webapp/locations/us-central1/functions/workContracts",
"permission": "cloudfunctions.functions.update",
"granted": true,
"resourceAttributes": {}
}
],
"resourceName": "projects/pyme-webapp/locations/us-central1/functions/workContracts",
"request": {
"updateMask": "name,sourceUploadUrl,entryPoint,runtime,labels,httpsTrigger,availableMemoryMb,environmentVariables,sourceToken",
"function": {
"runtime": "nodejs16",
"availableMemoryMb": 512,
"entryPoint": "workContracts",
"name": "projects/pyme-webapp/locations/us-central1/functions/workContracts",
"sourceUploadUrl": "https://storage.googleapis.com/gcf-upload-us-central1-d393f99f-6b88-4b68-8202-d75b734aa7a1/64b2646f-35b6-4919-8e89-c662fc29f01f.zip?GoogleAccessId=service-748321615979#gcf-admin-robot.iam.gserviceaccount.com&Expires=1643835053&Signature=McjqD9mmo%2F1wLbvO6SklkHi%2B34nQEwcpz7cLOLNAF4RwG8bpHh8RThxFJwnGZo1F92iQnquRQyGYbJFuihP%2FUGrgW7cG6GmhVq2gkugDywngZXT9d7UTBG0wgKF29XcbZkwV3IX7oKKiUwf6Q6mzCOOoCrjc5LBxqJo9WvWDZynv8R75nVZTZ5IhekMdqAw%2BRvIBvooXa%2BuA3Sezhh%2Bz2BR1XtIyS21CY%2FkoPDaKPwvftr3%2Fjcyuzb2V39%2BSajQg3t0U7Gt6oSch9qUhl6gnknr6wphFGmC7t7h9l0LUbjHUDuaMNNoB1LXxI30CRNkRupf9XBKTKpKMf%2F0nAAMltA%3D%3D",
"httpsTrigger": {},
"labels": {
"deployment-tool": "cli-firebase"
}
},
"#type": "type.googleapis.com/google.cloud.functions.v1.UpdateFunctionRequest"
},
"resourceLocation": {
"currentLocations": [
"us-central1"
]
}
},
"insertId": "1g6c2gwd46lm",
"resource": {
"type": "cloud_function",
"labels": {
"region": "us-central1",
"function_name": "workContracts",
"project_id": "pyme-webapp"
}
},
"timestamp": "2022-02-02T20:21:00.307699Z",
"severity": "NOTICE",
"logName": "projects/pyme-webapp/logs/cloudaudit.googleapis.com%2Factivity",
"operation": {
"id": "operations/cHltZS13ZWJhcHAvdXMtY2VudHJhbDEvd29ya0NvbnRyYWN0cy96bHlTLUtwbzI2VQ",
"producer": "cloudfunctions.googleapis.com",
"first": true
},
"receiveTimestamp": "2022-02-02T20:21:00.985842395Z"
}
If this isn't the right log to look at, just let me know what to look for; I'd appreciate the help.
So it turns out that this morning I logged in, checked, and everything was fine. I still have no logs stating the exact cause of the error, but the same functions, the same code, and the exact same deployment method now work, and the functions seem to be running fine.
This is concerning, as separate cloud functions should never be affected by each other's deployments.
A cloud function that takes a POST request and sends data to SendGrid, for example, has nothing to do with a cloud function triggered by updates to the Firestore database; if both have been deployed since the 5th of January and never touched again (in terms of edits), they should not be showing the same deployment error message across the board.
My temporary solution is to delete the function and then deploy. It seems it cannot be deployed while in use. I'm sorry I couldn't provide a better solution; I will edit this as soon as I can.
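For illustration, the delete-then-redeploy workaround can be scripted roughly as follows. This is only a sketch that assumes the Firebase CLI is installed and logged in; the function name and region are taken from the logs above:
import subprocess

FUNCTION = "applicationSubmitted"  # example name from the logs above
REGION = "us-central1"

# Delete the stuck function non-interactively, then redeploy only that function.
subprocess.run(
    ["firebase", "functions:delete", FUNCTION, "--region", REGION, "--force"],
    check=True,
)
subprocess.run(
    ["firebase", "deploy", "--only", f"functions:{FUNCTION}"],
    check=True,
)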

AWS LexV2 messenger channel integration not sending requestAttributes

I am building an AWS Lex V2 chat bot that I want to integrate with Facebook Messenger via the channel integration provided by Lex V2. I am also using a Lambda code hook to validate my inputs and fulfill my intents.
Following the docs, everything works as expected except for one thing: when I log the event in my Lambda function, the requestAttributes field is missing from the object.
My lambda function code:
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

def lambda_handler(event, context):
    logger.error(event)
    return ...
When I send a message to my app/page on Facebook, this is what is logged:
{
"sessionId": "session-id",
"inputTranscript": "Intent utterance",
"interpretations":
[
{
"intent":
{
"slots":
{
"slot1": null,
"slot2": null
},
"confirmationState": "null",
"name": "IntentName",
"state": "InProgress"
},
"nluConfidence": 1.0
},
{
"intent":
{
"slots":
{},
"confirmationState": "null",
"name": "FallbackIntent",
"state": "InProgress"
}
}
],
"responseContentType": "text/plain; charset=utf-8",
"invocationSource": "DialogCodeHook",
"messageVersion": "1.0",
"sessionState":
{
"intent":
{
"slots":
{
"slot1": null,
"slot2": null
},
"confirmationState": "null",
"name": "IntentName",
"state": "InProgress"
},
"originatingRequestId": "req-id"
},
"bot":
{
"aliasId": "ALIASID",
"aliasName": "AliasName",
"name": "BotName",
"version": "DRAFT",
"localeId": "en_US",
"id": "bot-id"
},
"inputMode": "Text"
}
As you can see, no requestAttributes.
What's weird to me is that when I do the same thing with a Lex V1 bot (same Facebook page/Messenger app), I do get these fields, e.g.
{
"requestAttributes":
{
"x-amz-lex:facebook-page-id": "page-id",
"x-amz-lex:channel-id": "channel-id",
"x-amz-lex:webhook-endpoint-url": "webhook-endpoint",
"x-amz-lex:accept-content-types": "PlainText",
"x-amz-lex:user-id": "user-id",
"x-amz-lex:channel-name": "channel-name",
"x-amz-lex:channel-type": "Facebook"
}
}
If anyone has any tips (other than "switch to V1") I would be very grateful. Thanks :))
Notes:
Changed Python Nones to nulls to make the JSON easier to read (see the small sketch after these notes).
Replaced IDs and names from the logs with generic placeholders.
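The None-to-null conversion mentioned in the first note is simply what json.dumps does when printing the logged event, e.g.:
import json

# A logged Lambda event is a Python dict; dumping it as JSON turns None into null.
event = {"sessionId": "session-id", "requestAttributes": None}
print(json.dumps(event, indent=4))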

Amazon Lex Integration into AWS Lambda

I have been working with the Alexa Skills Kit for some time now, with my code deployed in AWS Lambda and written in Node.js. Now I want to integrate chatbots into it via the Amazon Lex service, so that I can control my device using both Amazon Alexa and Amazon Lex. My question is: if I use the same intent and slot names in Amazon Lex as I did in my Alexa skill, would the AWS Lambda code just work out of the box, or would I have to modify the AWS Lambda code to accommodate Lex?
You will have to accommodate the differences between Lex and Alexa. The most notable differences are the request and response formats.
Notable differences to be aware of:
Major differences in the formats and in how sessionAttributes and slots are passed.
Lex has five built-in slotTypes that Alexa does not use (yet?): AMAZON.EmailAddress, AMAZON.Percentage, AMAZON.PhoneNumber, AMAZON.SpeedUnit, and AMAZON.WeightUnit. (Reference.)
Lex always passes the full user input through inputTranscript. Alexa does not.
Alexa provides resolutions for slot values but fills the actual slot value with the raw data extracted from the input.
Lex will automatically resolve a slot value if you have synonyms set for that slotType.
After working with both of them quite a lot, and often dealing with this, I much prefer Lex to Alexa. I have found Lex to be simpler and to provide more developer freedom and control, even though you do have to conform to the restrictions of each of Lex's output channels.
Compare Request / Response Formats:
Alexa JSON format
Lex JSON format
Example Alexa Request:
{
"version": "1.0",
"session": {
"new": true,
"sessionId": "amzn1.echo-api.session.[unique-value-here]",
"application": {
"applicationId": "amzn1.ask.skill.[unique-value-here]"
},
"attributes": {
"key": "string value"
},
"user": {
"userId": "amzn1.ask.account.[unique-value-here]",
"accessToken": "Atza|AAAAAAAA...",
"permissions": {
"consentToken": "ZZZZZZZ..."
}
}
},
"context": {
"System": {
"device": {
"deviceId": "string",
"supportedInterfaces": {
"AudioPlayer": {}
}
},
"application": {
"applicationId": "amzn1.ask.skill.[unique-value-here]"
},
"user": {
"userId": "amzn1.ask.account.[unique-value-here]",
"accessToken": "Atza|AAAAAAAA...",
"permissions": {
"consentToken": "ZZZZZZZ..."
}
},
"apiEndpoint": "https://api.amazonalexa.com",
"apiAccessToken": "AxThk..."
},
"AudioPlayer": {
"playerActivity": "PLAYING",
"token": "audioplayer-token",
"offsetInMilliseconds": 0
}
},
"request": {}
}
Example Lex Request:
{
"currentIntent": {
"name": "intent-name",
"slots": {
"slot name": "value",
"slot name": "value"
},
"slotDetails": {
"slot name": {
"resolutions" : [
{ "value": "resolved value" },
{ "value": "resolved value" }
],
"originalValue": "original text"
},
"slot name": {
"resolutions" : [
{ "value": "resolved value" },
{ "value": "resolved value" }
],
"originalValue": "original text"
}
},
"confirmationStatus": "None, Confirmed, or Denied (intent confirmation, if configured)"
},
"bot": {
"name": "bot name",
"alias": "bot alias",
"version": "bot version"
},
"userId": "User ID specified in the POST request to Amazon Lex.",
"inputTranscript": "Text used to process the request",
"invocationSource": "FulfillmentCodeHook or DialogCodeHook",
"outputDialogMode": "Text or Voice, based on ContentType request header in runtime API request",
"messageVersion": "1.0",
"sessionAttributes": {
"key": "value",
"key": "value"
},
"requestAttributes": {
"key": "value",
"key": "value"
}
}
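To illustrate what accommodating both formats looks like in practice, a single Lambda handler can branch on the shape of the incoming event. This is only a sketch (written in Python for brevity, although the skill in the question is in Node.js), and the two handler functions are placeholders for your actual logic:
def lambda_handler(event, context):
    # Alexa requests carry a top-level "session"/"request" envelope,
    # while Lex (V1) requests carry "currentIntent" and "bot".
    if "request" in event and "session" in event:
        return handle_alexa(event)
    if "currentIntent" in event:
        return handle_lex(event)
    raise ValueError("Unrecognized event format")

def handle_alexa(event):
    # Minimal Alexa-style response: speech only, end the session.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": "Hello from Alexa"},
            "shouldEndSession": True,
        },
    }

def handle_lex(event):
    # Minimal Lex V1-style response: close the dialog with a message.
    return {
        "sessionAttributes": event.get("sessionAttributes", {}),
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": "Hello from Lex"},
        },
    }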

Autodesk BIM360: Folder Creation returns 400 Bad request

I called the POST folder-creation command via Postman with a JSON body according to this example, but I only receive "400 Bad Request" without any explanation. This is what my request looks like:
The service address:
https://developer.api.autodesk.com/data/v1/projects/b.5823d0b2-0000-0000-00/commands
The HTTP headers:
Authorization: Bearer 2_legged_token
Content-Type: application/vnd.api+json
The JSON body:
{
"jsonapi": {
"version": "1.0"
},
"data": {
"type": "commands",
"attributes": {
"extension": {
"type": "commands:autodesk.core:CreateFolder",
"version": "1.0.0",
"data": {
"requiredAction": "create"
}
}
},
"relationships": {
"resources": {
"data": [
{
"type": "folders",
"id": "1"
}
]
}
}
},
"included": [
{
"type": "folders",
"id": "1",
"attributes": {
"name": "test",
"extension": {
"type": "folders:autodesk.bim360:Folder",
"version": "1.0.0"
}
},
"relationships": {
"parent": {
"data": {
"type": "folders",
"id": "urn:adsk.wipprod:fs.folder:co.Ai*****"
}
}
}
}
]
}
The response
{
"jsonapi": {
"version": "1.0"
},
"errors": [
{
"id": "f1266e76-a37e-400b-bff6-de84b11cdb00",
"status": "400",
"detail": "BadRequest"
}
]
}
What I have found out so far:
The project ID is right; when I use a wrong project ID, I receive a different error.
The JSON is also valid.
When I use a (surely) wrong parent folder URN, I receive the same error message, so maybe the URN format is wrong or something?
As of now, you can create a BIM 360 Docs folder with the commands endpoint, as you pointed out. For that you can use:
a 3-legged token
a 2-legged token with the x-user-id header; this should contain the Autodesk user ID obtained, for instance, from the GET users/@me endpoint (see the sketch below)
a "pure" 2-legged token will return Bad Request (as of August 2017)
Sorry about the documentation; the endpoint to create BIM 360 Docs folders via commands was released a couple of weeks ago and we're just finishing writing the documentation.
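As a sketch of the second option (2-legged token plus x-user-id), the Postman call from the question might translate to Python roughly like this; the token, user ID, and parent folder URN are placeholders:
import requests  # assumes the requests package is available

PROJECT_ID = "b.5823d0b2-0000-0000-00"  # project ID from the question (truncated there)
url = f"https://developer.api.autodesk.com/data/v1/projects/{PROJECT_ID}/commands"

headers = {
    "Authorization": "Bearer <2_legged_token>",  # placeholder token
    "x-user-id": "<autodesk_user_id>",           # e.g. from GET users/@me
    "Content-Type": "application/vnd.api+json",
}

# Same CreateFolder command body as in the question, with placeholders.
body = {
    "jsonapi": {"version": "1.0"},
    "data": {
        "type": "commands",
        "attributes": {
            "extension": {
                "type": "commands:autodesk.core:CreateFolder",
                "version": "1.0.0",
                "data": {"requiredAction": "create"},
            }
        },
        "relationships": {
            "resources": {"data": [{"type": "folders", "id": "1"}]}
        },
    },
    "included": [
        {
            "type": "folders",
            "id": "1",
            "attributes": {
                "name": "test",
                "extension": {"type": "folders:autodesk.bim360:Folder", "version": "1.0.0"},
            },
            "relationships": {
                "parent": {"data": {"type": "folders", "id": "<parent_folder_urn>"}}
            },
        }
    ],
}

resp = requests.post(url, headers=headers, json=body)
print(resp.status_code, resp.json())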