I'm trying to set up a YAML file for the Gateway API to POST to a Cloud Function, but I don't know how to do that. I searched the internet and found some examples, but when I created the gateway with the settings from my YAML file I got an error saying that at /paths/~1dispatcher/post/parameters an array is expected but an object is given.
I know the cause of the error is probably a YAML file indentation error, but I'm not able to resolve it.
My YAML file is configured as follows:
swagger: '2.0'
info:
  title: gateway-homologation gateway for homologation of the project cycle
  description: "Send a deal object for the data to be treated"
  version: "1.0.0"
schemes:
  - https
produces:
  - application/json
paths:
  /dispatcher:
    post:
      x-google-backend:
        address: https://url
      description: "Jailson was here =)"
      operationId: "dispatcher"
      parameters:
        type: object
        properties:
          request_type:
            type: string
          deal:
            name:
              type: string
      responses:
        200:
          description: "#OK"
          schema:
            type: string
        400:
          description: "#OPS"
Another question is how can I configure what my gateway will send to my cloud function?
" -The error message is quite clear that at /paths/~1dispatcher/post/parameters, an array is expected but an object is given. This is not a YAML error, but a schema error – you have to give the structure defined by the schema. Making the value of parameters an array will overcome that error, but I do not know enough about Swagger to be confident that this is the only error in your code.
I understood what you said, but I'm not able to do this. How would I turn the object into an array in practice?
The docs on Describing Parameters have some examples; a leading - starts a sequence item, sequence items make up a YAML sequence, and in a JavaScript context a sequence will be loaded as an array.
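For example, a minimal sketch of what an array-valued parameters section could look like, assuming the intent is to accept a JSON body with request_type and deal.name (the body parameter name payload is just a placeholder, not something from the original file):

parameters:
  - in: body
    name: payload        # placeholder name for the body parameter
    required: true
    schema:
      type: object
      properties:
        request_type:
          type: string
        deal:
          type: object
          properties:
            name:
              type: string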
I have a Dapr application running locally, self-hosted with the Dapr CLI. I've configured a Dapr Component and Subscription for subscribing to an Azure Event Hub, detailed below:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: eventhubs-pubsub
spec:
  type: pubsub.azure.eventhubs
  version: v1
  metadata:
    - name: connectionString
      value: "Endpoint=sb://[removed].servicebus.windows.net/;SharedAccessKeyName=[removed];SharedAccessKey=[removed];EntityPath=myhub"
    - name: enableEntityManagement
      value: "false"
    - name: storageAccountName
      value: "[removed]"
    - name: storageAccountKey
      value: "[removed]"
    - name: storageContainerName
      value: "myapp"
scopes:
  - myapp
---
apiVersion: dapr.io/v1alpha1
kind: Subscription
metadata:
  name: myhub-subscription
spec:
  topic: myhub
  route: /EventHubsInput
  pubsubname: eventhubs-pubsub
scopes:
  - myapp
I've manually created a consumer group with the name of the Dapr app id - "myapp".
I've called the HTTP endpoint directly - a POST verb returning 200 - and it works fine. It also responds to OPTIONS verb.
The application starts successfully with no errors or warnings. I can see a logged message saying:
INFO[0000] connectionString provided is specific to event hub "myhub". Publishing or subscribing to a topic that does not match this event hub will fail when attempted. app_id=myapp instance=OldManWaterfall scope=dapr.contrib type=log ver=1.6.0
INFO[0000] component loaded. name: eventhubs-pubsub, type: pubsub.azure.eventhubs/v1 app_id=myapp instance=OldManWaterfall scope=dapr.runtime type=log ver=1.6.0
No other message is logged regarding the pubsub, and no message indicating failure or success of the subscription itself. Nothing is created in the storage container. If I remove the storage-related config from the Component, no failure is reported, despite those properties being mandatory. When I put a message on the Hub, unsurprisingly nothing happens.
What am I doing wrong? Everything I've read seems to indicate this set up should work.
I was able to fix this by exposing my app over http instead of https. Unfortunately there was no logging to indicate https was the issue, even with debug level switched on.
I collect the data using puppeteer and chrome-aws-lambda. I plan to push it to AWS Lambda but while testing locally I get an error:
Error: Protocol error (Runtime.callFunctionOn): Target closed.
when I call for waitForSelector.
I've seen some posts that mentioned there is a chance that the Chrome process gets too little memory within Docker. The question is: how do I give it more memory? I also read that disable-dev-shm-usage may help, but it doesn't. This is how I do it now (the last line is where the error happens):
const chromium = require('chrome-aws-lambda');

browser = await chromium.puppeteer.launch({
  args: [...chromium.args, `--proxy-server=${proxyUrl}`, '--disable-dev-shm-usage'],
  defaultViewport: chromium.defaultViewport,
  executablePath: await chromium.executablePath,
  headless: chromium.headless,
  ignoreHTTPSErrors: true,
});

const page = await browser.newPage();
await page.authenticate({ username, password });
await page.goto(MY_URL, { waitUntil: 'domcontentloaded' });
await page.click(SUBMIT_SELECTOR);
await page.waitForSelector('#myDiv')
  .then(() => console.log('got it'))
  .catch((e) => console.log('Error happens: ' + e));
UPDATE: more info on local setup:
I run it locally using sam local start-api.
Here is the content of my template.yaml (just a slightly updated hello-world template):
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  samnode

  Sample SAM Template for samnode

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 60

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello-world/
      Handler: app.lambdaHandler
      Runtime: nodejs14.x
      MemorySize: 4096
      Layers:
        - !Sub 'arn:aws:lambda:${AWS::Region}:764866452798:layer:chrome-aws-lambda:22'
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get

Outputs:
  # ServerlessRestApi is an implicit API created out of Events key under Serverless::Function
  # Find out more about other implicit resources you can reference within SAM
  # https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api
  HelloWorldApi:
    Description: "API Gateway endpoint URL for Prod stage for Hello World function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"
  HelloWorldFunction:
    Description: "Hello World Lambda Function ARN"
    Value: !GetAtt HelloWorldFunction.Arn
  HelloWorldFunctionIamRole:
    Description: "Implicit IAM Role created for Hello World function"
    Value: !GetAtt HelloWorldFunctionRole.Arn
MemorySize: 4096
You have already configured 4 GB of memory for the Lambda, and it should be more than enough to load a couple of pages. If you still feel this is the issue, you can increase the memory up to 10240 MB. I suspect the error may not be related to memory.
To verify, you can do the following to see if the Lambda is actually getting the specified memory.
Run the lambda in Eager mode (This keeps the lambda running on local even if there are no active requests)
sam local start-api --warm-containers EAGER
Now run the following command to track the memory consumption
docker stats
You can send a request to your local api now and track the memory consumption.
If you see less than 4GB memory allocated to your lambda function, then update the Docker resources and ensure you allocate appropriate memory to Docker.
Update Docker Resources (Increase memory)
Windows
Mac
Try out different versions of chrome-aws-lambda (maybe using a local layer with SAM; see the sketch after these steps).
I would also run the same block of code locally using Puppeteer with headless mode disabled, to verify that the selector the code is waiting for is actually available.
Install the puppeteer dependency.
Update the code to use puppeteer instead of chrome-aws-lambda
const puppeteer = require('puppeteer');
Disable headless mode
browser = await puppeteer.launch({headless: false});
Now run the file with node <replace-with-your-file-name.js>, e.g. if the file name is somejsfile.js then the command would be node somejsfile.js
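On the local-layer suggestion above, here is a minimal sketch of how a locally built chrome-aws-lambda layer could be declared in template.yaml; the resource name ChromeAwsLambdaLayer and the layers/chrome-aws-lambda/ path are placeholders, not part of the original template:

Resources:
  ChromeAwsLambdaLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      ContentUri: layers/chrome-aws-lambda/   # local folder containing nodejs/node_modules/chrome-aws-lambda
      CompatibleRuntimes:
        - nodejs14.x

  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      # ...keep the existing properties from the question...
      Layers:
        - !Ref ChromeAwsLambdaLayer   # reference the local layer instead of the public layer ARN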
Hope this helps you proceed further.
I started looking into using AWS HTTP API as a single point of entry to some microservices running with ECS.
One micro service has the following route internally on the server:
/sessions/{session_id}/topics
I define exactly the same route in my HTTP API and use Cloud Map and a VPC Link to reach my ECS cluster. So far so good, the requests can reach the servers. The path, however, is not the same when it arrives. As per the AWS documentation [1], it will prepend the stage name, so that the request looks like the following when it arrives:
/{stage_name}/sessions/{session_id}/topics
So I started to look into Parameter mappings so that I can change the path for the integration, but I cannot get it to work.
For requestParameters I want to overwrite the path as below, but for some reason the original path with the stage variable is still there. If I define overwrite:path as just $request.path.sessionId I get only the ID as the path, and if I write an arbitrary fixed string it arrives exactly as I define it. But when I mix $request.path.sessionId with the other parts of the string it does not seem to work.
How do I format this correctly?
paths:
  /sessions/{sessionId}/topics:
    post:
      responses:
        default:
          description: "Default response for POST /sessions/{sessionId}/topics"
      x-amazon-apigateway-integration:
        requestParameters:
          overwrite:path: "/sessions/$request.path.sessionId/topics"
        payloadFormatVersion: "1.0"
        connectionId: (removed)
        type: "http_proxy"
        httpMethod: "POST"
        uri: (removed)
        connectionType: "VPC_LINK"
        timeoutInMillis: 30000
[1] https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-private.html
You can try using braces, i.e. the formal notation instead of the shorthand notation:
overwrite:path: "/sessions/${request.path.sessionId}/topics"
It worked well for me for complex mappings.
A mapping template is a script expressed in Velocity Template Language (VTL).
Don't remove the uri and the connectionId and it will work for you. Add only:

requestParameters:
  overwrite:path: "/sessions/$request.path.sessionId/topics"
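Putting that together, a sketch of how the integration could look with the mapping in place; the connectionId and uri values below are placeholders for the ones removed from the question:

x-amazon-apigateway-integration:
  requestParameters:
    overwrite:path: "/sessions/$request.path.sessionId/topics"
  payloadFormatVersion: "1.0"
  connectionId: "abc123"                    # placeholder - keep your existing VPC link ID
  type: "http_proxy"
  httpMethod: "POST"
  uri: "http://my-service.internal/topics"  # placeholder - keep your existing Cloud Map service URI
  connectionType: "VPC_LINK"
  timeoutInMillis: 30000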
I am attempting to limit traffic by request size using Istio. Given that the virtual service does not provide this, I am trying to do it via a mixer policy.
I set up the following:
---
apiVersion: "config.istio.io/v1alpha2"
kind: handler
metadata:
  name: denylargerequest
spec:
  compiledAdapter: denier
  params:
    status:
      code: 9
      message: Request Too Large
---
apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
  name: denylargerequest
spec:
  compiledTemplate: checknothing
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: denylargerequest
spec:
  match: destination.labels["app"] == "httpbin" && request.size > 100
  actions:
  - handler: denylargerequest
    instances: [ denylargerequest ]
Requests are not denied and I see the following error from istio-mixer
2020-01-07T15:42:40.564240Z warn input set condition evaluation error: id='2', error='lookup failed: 'request.size''
If I remove the request.size portion of the match I get the expected behavior which is a 400 http status with a message about request size. Of course, I get it on every request which is not desired. But that, along with the above error makes it clear that the request.size attribute is the problem.
I do not see anywhere in istio's docs what attributes are available to the mixer rules.
I am running istio 1.3.0.
Any suggestions on the mixer rule? Or an alternative way to enforce request size limits via istio?
The rule match mentioned in the question:
match: destination.labels["app"] == "httpbin" && request.size > 100
will not work because of mismatched attribute types.
According to the Istio documentation:
Match is an attribute based predicate. When Mixer receives a request
it evaluates the match expression and executes all the associated
actions if the match evaluates to true.
A few example matches:
an empty match evaluates to true
true, a boolean literal; a rule with this match will always be executed
match(destination.service.host, "ratings.*") selects any request targeting a service whose name starts with “ratings”
attr1 == "20" && attr2 == "30" logical AND, OR, and NOT are also available
This means that request.size > 100 compares integer values, which are not supported here.
However, it is possible to do this with the help of the Common Expression Language (CEL).
You can enable CEL in istio by using the policy.istio.io/lang annotation (set it to CEL).
Then by using Type Values from the List of Standard Definitions we can use functions to parse values into different types.
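As a minimal sketch of what that could look like, assuming the policy.istio.io/lang annotation goes on the rule's metadata (not verified against an Istio 1.3 cluster):

apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: denylargerequest
  annotations:
    policy.istio.io/lang: CEL   # evaluate the match expression as CEL instead of the default language
spec:
  match: destination.labels["app"] == "httpbin" && request.size > 100
  actions:
  - handler: denylargerequest
    instances: [ denylargerequest ]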
Just as a suggestion for a solution: an alternative way would be to use an EnvoyFilter, like in this GitHub issue.
According to another related github issue about Envoy's per connection buffer limit:
The current resolution is to use envoyfilter, reopen if you feel this is a must feature
Hope this helps.
I want to create two databases in one Cloud SQL instance, but when the configuration is written in the following way it results in an error.
resources:
- name: test-instance
  type: sqladmin.v1beta4.instance
  properties:
    region: us-central
    backendType: SECOND_GEN
    instanceType: CLOUD_SQL_INSTANCE
    settings:
      tier: db-f1-micro
- name: test_db1
  type: sqladmin.v1beta4.database
  properties:
    instance: $(ref.test-instance.name)
    charset: utf8mb4
    collation: utf8mb4_general_ci
- name: test_db2
  type: sqladmin.v1beta4.database
  properties:
    instance: $(ref.test-instance.name)
    charset: utf8mb4
    collation: utf8mb4_general_ci
output:
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation
[operation-********]
- code: RESOURCE_ERROR
location: /deployments/sample-deploy/resources/test_db2
message:
'{"ResourceType":"sqladmin.v1beta4.database","ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"errors":[{"domain":"global","message":"Operation
failed because another operation was already in progress.","reason":"operationInProgress"}],"message":"Operation
failed because another operation was already in progress.","statusMessage":"Forbidden","requestPath":"https://www.googleapis.com/sql/v1beta4/projects/****/instances/test-instance/databases","httpMethod":"POST"}}'
Please tell me what to do to resolve the error.
The “ResourceErrorCode” error originates with the Cloud SQL API.
The issue here is that Deployment Manager will try to run all resource modifications in parallel (unless you specify a dependency between resources). Deployment Manager uses a declarative configuration, so it will create the resources in parallel whether they are independent of each other or not.
In this specific case, CloudSQL is not able to create two databases at the same time. This is why you are seeing the error message: Operation failed because another operation was already in progress.
There can be only one pending operation at a given point in time because of the inherent system architecture. This is a limitation on concurrent writes to a Cloud SQL database.
To resolve this issue, you will have to create the two databases in sequence, not in parallel.
For more information on how to do so, you may consult the documentation on this matter.
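As a sketch of one way to serialize them, declaring an explicit dependency so that Deployment Manager creates test_db2 only after test_db1 has finished (the rest of the configuration stays as in the question):

- name: test_db1
  type: sqladmin.v1beta4.database
  properties:
    instance: $(ref.test-instance.name)
    charset: utf8mb4
    collation: utf8mb4_general_ci
- name: test_db2
  type: sqladmin.v1beta4.database
  metadata:
    dependsOn:
      - test_db1   # wait for test_db1 to be created before starting this operation
  properties:
    instance: $(ref.test-instance.name)
    charset: utf8mb4
    collation: utf8mb4_general_ci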