Using the REST-API with the getting started guide returns 500 error - loopbackjs

Following the instructions at https://fabric-composer.github.io/start/getting-started-rest-api.html, testing the generated API with
curl -X GET --header "Accept: application/json" "http://0.0.0.0:3000/api/net.biz.digitalPropertyNetwork.LandTitle"
generates the following error:
{
"error": {
"statusCode": 500,
"name": "Error",
"message": "No registered namespace for type net_biz_digitalPropertyNetwork_LandTitle",
"stack": "Error: No registered namespace for type net_biz_digitalPropertyNetwork_LandTitle\n at ModelManager.getType (/Users/matt/Documents/workspaces/blockchain/src/github.com/fabric-composer/sample-applications/node_modules/composer-common/lib/modelmanager.js:265:23)\n at ensureConnected.then (/Users/matt/Documents/workspaces/blockchain/src/github.com/fabric-composer/sample-applications/node_modules/composer-loopback-connector/lib/businessnetworkconnector.js:198:53)\n at process._tickDomainCallback (internal/process/next_tick.js:129:7)"
}
}
I see that the boot script swaps dots for underscores, is this needed somewhere else too, perhaps?
// this is required because LoopBack doesn't like dots in model schema names
modelSchema.name = modelSchema.plural.replace(/\./g, '_');
For reference, here are the node dependencies of my loopback package
"dependencies": {
"composer-loopback-connector": "^0.4.0",
"compression": "^1.0.3",
"cors": "^2.5.2",
"helmet": "^1.3.0",
"loopback": "^2.22.0",
"loopback-boot": "^2.6.5",
"loopback-component-explorer": "^2.4.0",
"loopback-connector-composer": "^0.4.1",
"loopback-datasource-juggler": "^2.39.0",
"serve-favicon": "^2.0.1",
"strong-error-handler": "^1.0.1"
}

Yes, you're right about the boot script swapping the dots for underscores. This is because LoopBack doesn't accept dots in model names, so the boot script swaps them out and the connector accounts for that when it is called by the LoopBack app.
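To illustrate the idea (this is only a sketch of the mapping, not the actual connector code), the model name handed to LoopBack is the fully qualified type with its dots flattened to underscores, and anything resolving types from that name has to undo the swap:
function toLoopBackName(fqn: string): string {
  // "net.biz.digitalPropertyNetwork.LandTitle" -> "net_biz_digitalPropertyNetwork_LandTitle"
  return fqn.replace(/\./g, '_');
}

function toComposerFqn(loopBackName: string): string {
  // "net_biz_digitalPropertyNetwork_LandTitle" -> "net.biz.digitalPropertyNetwork.LandTitle"
  return loopBackName.replace(/_/g, '.');
}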
This was a problem with the original connector, which is in npm as composer-connector-loopback. That version hasn't been removed yet but should be soon.
It has since been renamed to bring it in line with other LoopBack connectors and is also in npm as loopback-connector-composer.
I'll raise an issue to get the getting-started guide you mentioned updated. If you can switch over to the other connector, that should solve the problem. HTH.

Related

AWS amplify running datastore with type/javascript freeze in the console

I am using the aws-amplify API to manipulate the DataStore. Everything was going fine and all queries were running successfully until it suddenly stopped working.
In my case I am building an NPM package to wrap aws-amplify functionality with Node and TypeScript, and another developer is using the package to build a native app with React Native.
So when I implement new functions I test them locally with ts-node (something like DataStore.query or DataStore.save, etc.), and the other developer tests with Expo after installing the latest package release I have made.
At one point we got a problem saying:
[WARN] 04:14.549 DataStore, Object {
"cause": Object {
"error": Object {
"errors": Array [
Object {
"message": "Connection failed: {\"errors\":{\"errorType\":\"MaxSubscriptionsReachedError\",\"message\":\"Max number of 100 subscriptions reached\"}}",
},
],
},
When that happened, I tried to run queries locally and they worked fine, with a warning:
[WARN] 33:35.743 DataStore - Realtime disabled when in a server-side environment
So we thought it was a cache problem or something. But now nothing in the DataStore works at all. If I try to run code locally with ts-node, the console freezes and never comes back.
For example, if I do:
the console will freeze with the warning message:
We tried to fix AppSync and the subscriptions, but it is not working at all.
The Cognito user pool works fine, S3 is also fine; only DataStore is sad :(
// How we configure amplify
this.awsExports = Amplify.configure({ ...awsConfig });
// How we import Datastore
import { DataStore } from "aws-amplify";
// Our dependencies
"dependencies": {
"#aws-amplify/core": "^4.6.0",
"#aws-amplify/datastore": "^3.12.4",
"#react-native-async-storage/async-storage": "^1.17.4",
"#react-native-community/netinfo": "^8.3.0",
"#types/amplify": "^1.1.25",
"algoliasearch": "^4.14.1",
"aws-amplify": "^4.3.29",
"aws-amplify-react-native": "^6.0.5",
"aws-sdk": "^2.1142.0",
"aws-sdk-mock": "^5.7.0",
"eslint-plugin-jsdoc": "^39.2.9",
"mustache": "^4.2.0"
}
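For context, the kind of script I run locally with ts-node looks roughly like this (a rough sketch only: the Todo model and the ./models and ./aws-exports paths are placeholders standing in for our generated code):
// repro.ts -- rough sketch of how we exercise DataStore from ts-node
import Amplify, { DataStore } from "aws-amplify";
import awsConfig from "./aws-exports";   // placeholder path for the generated config
import { Todo } from "./models";         // placeholder model from `amplify codegen models`

Amplify.configure({ ...awsConfig });

async function main(): Promise<void> {
  // In a server-side environment this logs:
  // [WARN] DataStore - Realtime disabled when in a server-side environment
  const items = await DataStore.query(Todo);
  console.log(`queried ${items.length} items`);
}

main().catch(console.error);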
Can anyone please help?

"Host header is specified and is not an IP address or localhost" message when using chromedp headless-shell

I'm trying to deploy chromedp/headless-shell to Cloud Run.
Here is my Dockerfile:
FROM chromedp/headless-shell
ENTRYPOINT [ "/headless-shell/headless-shell", "--remote-debugging-address=0.0.0.0", "--remote-debugging-port=9222", "--disable-gpu", "--headless", "--no-sandbox" ]
The command I used to deploy to Cloud Run is
gcloud run deploy chromedp-headless-shell --source . --port 9222
Problem
When I go to this path /json/list, I expect to see something like this
[{
"description": "",
"devtoolsFrontendUrl": "/devtools/inspector.html?ws=localhost:9222/devtools/page/B06F36A73E5F33A515E87C6AE4E2284E",
"id": "B06F36A73E5F33A515E87C6AE4E2284E",
"title": "about:blank",
"type": "page",
"url": "about:blank",
"webSocketDebuggerUrl": "ws://localhost:9222/devtools/page/B06F36A73E5F33A515E87C6AE4E2284E"
}]
but instead, I get this error:
Host header is specified and is not an IP address or localhost.
Is there something wrong with my configuration or is Cloud Run not the ideal choice for deploying this?
This specific issue is not unique to Cloud Run. It originates from a change in the Chrome DevTools Protocol that generates this error when the debugging endpoint is accessed remotely, likely as a security measure against certain types of attacks. You can see the related Chromium pull request here.
I deployed a chromedp/headless-shell container to Cloud Run using your configuration and received the same error. There is a useful comment in a GitHub issue showing a workaround for this problem: passing a Host: localhost header. While this does work when I tested it locally, it does not work on Cloud Run (it returns a 404 error). This 404 error could be due to the fact that Cloud Run also uses the Host header to route requests to the correct service.
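For reference, this is roughly how I tested the workaround against a locally running container (a sketch only; the host and port are assumptions based on your Dockerfile, and Node's http client is used so the Host header can be overridden):
// check-devtools.ts -- send GET /json/list with a spoofed Host header
import * as http from "node:http";

const req = http.request(
  {
    host: "127.0.0.1",                  // where the container is actually reachable
    port: 9222,                         // the remote debugging port from the Dockerfile
    path: "/json/list",
    headers: { Host: "localhost" },     // DevTools only accepts localhost or an IP here
  },
  (res) => {
    let body = "";
    res.on("data", (chunk) => (body += chunk));
    res.on("end", () => console.log(res.statusCode, body));
  }
);
req.on("error", console.error);
req.end();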
Unfortunately this answer is not a solution, but it sheds some light on what you are seeing and why. I would consider using a different GCP service, such as GCE, which gives you plain virtual machines and is less managed.

Failed to convert server response to JSON | gcloud.services.operations.describe

I'm new to google cloud services. I was going through some tutorials and I had to run the following command in order to describe an operation.
$ gcloud services operations describe operations/acf.xxxx
However, this command failed with an error stating:
ERROR: (gcloud.services.operations.describe) INTERNAL: Failed to convert server response to JSON
I'm performing these operations in Windows PowerShell using bash commands. Is there any solution to resolve this?
Possibly a bug.
I get the same error on Linux with gcloud v.291.0.0.
You may wish to report this issue on Google's Issue Tracker.
A useful feature of gcloud is that you can append --log-http to any command to see the underlying REST API calls, which is often (though not really in this case) more illuminating about the error.
This yields (for me):
uri: https://serviceconsumermanagement.googleapis.com/v1beta1/operations/${OPERATION_ID}?alt=json
method: GET
...
{
"error": {
"code": 500,
"message": "Failed to convert server response to JSON",
"status": "INTERNAL"
}
}
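To reproduce the same underlying call outside of gcloud, a rough sketch like the following works (assumptions: Node 18+ for the global fetch, an access token from gcloud auth print-access-token, and a placeholder operation ID):
// describe-operation.ts -- replay the GET that gcloud makes under the hood
const operationId = process.env.OPERATION_ID;   // e.g. "acf.xxxx" (placeholder)
const accessToken = process.env.ACCESS_TOKEN;   // from `gcloud auth print-access-token`

async function main(): Promise<void> {
  const res = await fetch(
    `https://serviceconsumermanagement.googleapis.com/v1beta1/operations/${operationId}?alt=json`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  // With this bug, the API itself answers 500 "Failed to convert server response to JSON"
  console.log(res.status, await res.text());
}

main().catch(console.error);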
Another excellent debugging tool is APIs Explorer, which supports all of Google's REST endpoints. It is accessible from the API documentation:
https://cloud.google.com/service-infrastructure/docs/service-consumer-management/reference/rest/v1beta1/operations/get
If you complete the APIs Explorer form on the right-hand side, I suspect you'll receive the same error.
Both approaches appear to confirm that the issue is on Google's side.

aws pinpoint update-apns-sandbox-channel command results in: missing credentials

aws --version
aws-cli/1.16.76 Python/2.7.10 Darwin/16.7.0 botocore/1.12.66
I'm trying to programmatically add an APNS_SANDBOX channel to a Pinpoint app. I'm able to do this successfully via the Pinpoint console, but not with the AWS CLI or a Lambda function, which is the end goal. Changes to our Test/Prod environments can only be made via CodePipeline, but for testing purposes I'm trying to achieve this with the AWS CLI.
I've tried both aws cli (using the root credentials) and a lambda function -- both result in the following error:
An error occurred (BadRequestException) when calling the UpdateApnsSandboxChannel operation: Missing credentials
I have tried setting the Certificate field in the UpdateApnsSandboxChannel JSON object both to the path of the .p12 certificate file and to a string value retrieved from the openssl tool.
Today I worked with someone from AWS Support, and they were not able to figure out the issue after a couple of hours of debugging. They said they would send an email to the Pinpoint team, but they did not have an ETA on when they might respond.
Thanks
I ended up getting this to work successfully -- this is why it was failing:
I was originally making the CLI call with the following request object, as this is what is included in the documentation:
aws pinpoint update-apns-sandbox-channel --application-id [physicalID] --cli-input-json file:///path-to-requestObject.json
{
"APNSSandboxChannelRequest": {
"BundleId": "com.bundleId.value",
"Certificate":"P12_FILE_PATH_OR_CERT_AS_STRING",
"DefaultAuthenticationMethod": "CERTIFICATE",
"Enabled": true,
"PrivateKey":"PRIVATEKEY_FILE_PATH_OR_AS_STRING",
"TeamId": "",
"TokenKey": "",
"TokenKeyId": ""
},
"ApplicationId": "Pinpoint_PhysicalId"
}
After playing around with it some more, I got it to work by removing BundleId, TeamId, TokenKey, and TokenKeyId. I believe these fields are needed when using a p8 certificate.
{
"APNSSandboxChannelRequest": {
"Certificate":"P12_FILE_PATH_OR_CERT_AS_STRING",
"DefaultAuthenticationMethod": "CERTIFICATE",
"Enabled": true,
"PrivateKey":"PRIVATEKEY_FILE_PATH_OR_AS_STRING"
},
"ApplicationId": "Pinpoint_PhysicalId"
}
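Since the end goal is a Lambda function, a rough equivalent with the Node.js aws-sdk v2 would look something like this (an untested sketch: the region, file paths, and application ID are placeholders, and the certificate/private key are the PEM strings extracted from the .p12 with openssl):
// update-apns-sandbox.ts -- Lambda-style sketch of the same UpdateApnsSandboxChannel call
import * as AWS from "aws-sdk";
import * as fs from "fs";

const pinpoint = new AWS.Pinpoint({ region: "us-east-1" });   // region is a placeholder

export const handler = async (): Promise<void> => {
  const result = await pinpoint
    .updateApnsSandboxChannel({
      ApplicationId: "PINPOINT_APPLICATION_ID",               // placeholder
      APNSSandboxChannelRequest: {
        Certificate: fs.readFileSync("/path/to/cert.pem", "utf8"),
        PrivateKey: fs.readFileSync("/path/to/key.pem", "utf8"),
        DefaultAuthenticationMethod: "CERTIFICATE",
        Enabled: true,
      },
    })
    .promise();
  console.log(result.APNSSandboxChannelResponse);
};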

AWS API Gateway : Execution failed due to configuration error: No match for output mapping and no default output mapping configured

In AWS API Gateway, I have a GET method that invokes a lambda function.
When I test the method in the API Gateway dashboard, the lambda function executes successfully but API Gateway is not mapping the context.success() call to a 200 result despite having default mapping set to yes.
Instead I get this error:
Execution failed due to configuration error: No match for output mapping and no default output mapping configured
This is my Integration Response setup:
And this is my method response setup:
Basically, I would expect API Gateway to recognize the successful Lambda execution and map it by default to a 200 response, but that doesn't happen.
Does anyone know why this isn't working?
I had the same issue while deploying an API using the Serverless Framework. You can simply follow the steps below, which resolved my issue:
1. Navigate to AWS API Gateway.
2. Find your API and click on the method (POST, GET, ANY, etc.).
3. Click on Method Response.
4. Add a method response with status 200.
5. Save it and test.
I had a similar issue; it got resolved by adding the 200 method response.
This is a 'check-the-basics' type of answer. In my scenario, CORS and the bug mentioned above were not at issue. However, the error message given in the title is exactly what I saw and what led me to this thread.
Instead, as an API Gateway newb, I had failed to redeploy the deployment. Once I did that, everything worked.
As a bonus, for Terraform 0.12 users, the magic you need and should use is a triggers parameter for your aws_api_gateway_deployment resource. This will automatically redeploy for you when other related APIGW resources change. See the TF documentation for details.
There was an issue when saving the default integration response mapping, which has since been resolved. The bug caused requests to API methods that were saved incorrectly to return a 500 error; the CloudWatch logs should contain:
Execution failed due to configuration error:
No match for output mapping and no default output mapping configured.
Since 'Enable CORS' saves the default integration response, this issue also appeared in your scenario.
For more information, please refer to the AWS forums entry: https://forums.aws.amazon.com/thread.jspa?threadID=221197&tstart=0
Best,
Jurgen
What worked for me:
1. In the API Gateway console, created the OPTIONS method manually.
2. In the Method Response section under the created OPTIONS method, added 200 OK.
3. Selected the OPTIONS method and enabled CORS from the menu.
I found the problem:
Amazon had added a new button in the API Gateway resource configuration titled 'Enable CORS'. I had earlier clicked this; however, once enabled, there doesn't seem to be a way to disable it.
Enabling CORS using this button (instead of doing it manually, which is what I ended up doing) seems to cause an internal server error even on a successful Lambda execution.
SOLUTION: I deleted the resource and created it again without clicking on 'Enable CORS' this time, and everything worked fine.
This seems to be a BUG with that feature, but perhaps I just don't understand it well enough. Comment if you have any further information.
Thanks.
Check the box which says "Use Lambda Proxy integration".
This works fine for me. For reference, my lambda function looks like this...
import json

def lambda_handler(event, context):
    # Get params
    param1 = event['queryStringParameters']['param1']
    param2 = event['queryStringParameters']['param2']
    # Do some stuff with params to get body
    body = some_fn(param1, param2)
    # Return response object
    response_object = {}
    response_object['statusCode'] = 200
    response_object['headers'] = {}
    response_object['headers']['Content-Type'] = 'application/json'
    response_object['body'] = json.dumps(body)
    return response_object
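With proxy integration enabled, API Gateway passes the query string straight through on the event object, which is why the handler above can read event['queryStringParameters']. Roughly (illustrative and trimmed; real proxy events carry many more fields such as headers, path, httpMethod, and requestContext):
// Approximate shape of the proxy-integration event received by the handler above
const sampleProxyEvent = {
  queryStringParameters: {
    param1: "foo",
    param2: "bar",
  },
};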
I'll just drop this here because I faced the same issue today, and in my case it was that we were appending a / at the end of the endpoint. So, for example, if this is the definition:
{
"openapi": "3.0.1",
"info": {
"title": "some api",
"version": "2021-04-23T23:59:37Z"
},
"servers": [
{
"url": "https://api-gw.domain.com"
}
],
"paths": {
"/api/{version}/intelligence/topic": {
"get": {
"parameters": [
{
"name": "username",
"in": "query",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "version",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "x-api-key",
"in": "header",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "X-AWS-Cognito-Group-ID",
"in": "header",
"schema": {
"type": "string"
}
}
],
...
Remove any / at the end of the endpoint: /api/{version}/intelligence/topic. Also do the same in the uri of the apigateway-integration section of the Swagger + API GW extensions JSON.
Make sure your ARN for the role is indeed a role (and not, e.g., the policy).