Amazon API Gateway Swagger importer tool does not import minItems field from Swagger - amazon-web-services

I am trying the API Gateway validation example from https://github.com/rpgreen/apigateway-validation-demo . I noticed that the minItems field from the provided swagger.json is not imported into the models created during the Swagger import.
"CreateOrders": {
    "title": "Create Orders Schema",
    "type": "array",
    "minItems": 1,
    "items": {
        "type": "object",
        "$ref": "#/definitions/Order"
    }
}
Because of this, when you pass an empty array [] as input, instead of returning an error about the minimum number of items in the array, the API responds with the message 'created orders successfully'.
When I add the same constraint manually through the API Gateway console UI, it works as expected. Am I missing something, or is this a bug in the importer?

This is a known issue with the Swagger import feature of API Gateway.
From http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-known-issues.html
The maxItems and minItems tags are not included in simple request validation. To work around this, update the model after import before doing validation.
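Following that workaround, one way to restore the constraint after import is to re-add minItems to the model schema and push it back. A minimal sketch in Python; the REST API id and the boto3 call are assumptions (shown commented out) and the schema-patching helper is hypothetical:

```python
import json

def add_min_items(schema_json, min_items=1):
    """Re-add the minItems constraint that the importer dropped."""
    schema = json.loads(schema_json)
    schema["minItems"] = min_items
    return json.dumps(schema)

# Example: the imported model schema came back without minItems.
imported = json.dumps({
    "title": "Create Orders Schema",
    "type": "array",
    "items": {"$ref": "#/definitions/Order"},
})
patched = add_min_items(imported)

# Hypothetical boto3 call to push the fixed schema (IDs are placeholders):
# import boto3
# apigw = boto3.client("apigateway")
# apigw.update_model(
#     restApiId="abc123",
#     modelName="CreateOrders",
#     patchOperations=[{"op": "replace", "path": "/schema", "value": patched}],
# )
```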

Related

How to send a http patch method for a Google Cloud Deployment Manager resource using a python template

I'm creating an HA VPN using Google Cloud Deployment Manager, following this guide:
https://cloud.google.com/network-connectivity/docs/vpn/how-to/creating-ha-vpn#api_4
As part of the guide, I need to send a PATCH request to the Cloud Router that was already created, but I haven't found a way to issue a PATCH request from my Python template.
The resource is currently set up as follows in my Python template:
resources.extend([
    {
        # Cloud Router resource for HA VPN.
        'name': 'cloud_router',
        # https://cloud.google.com/compute/docs/reference/rest/v1/routers
        'type': 'gcp-types/compute-v1:routers',
        'properties': {
            'router': cloud_router,
            'name': cloud_router,
            'project': project_id,
            'network': network,
            'region': context.properties['region'],
            'interfaces': [{
                "name": f"{cloud_router}-bgp-int-0",
                "linkedVpnTunnel": "vpn_tunnel",
                "ipRange": context.properties["bgp_ip_0"] + context.properties["subnet_mask_0"],
            }],
        },
        'metadata': {
            'dependsOn': [
                f"{vpn_tunnel}0",
                f"{vpn_tunnel}1",
                cloud_router,
            ]
        },
    },
])
The rest of the resources (vpn_tunnel, vpnGateway, ExternalVPNGateway, cloud router) all create fine as POST requests in the Deployment Manager console.
The error I receive relates to the "linkedVpnTunnel" value, which is the name of the VPN tunnel as per the how-to guide. If I remove this field, the resource is created via the POST request, but the BGP peer isn't associated with the tunnel as required because of the missing field.
code: RESOURCE_ERROR
location: /deployments/ha-vpn-test/resources/cr-bgp-int
message: "{"ResourceType":"gcp-types/compute-v1:routers","ResourceErrorCode"
:"400","ResourceErrorMessage":{"code":400,"errors":[{"domain":"global"
,"message":"Invalid value for field 'resource.interfaces[0].linkedVpnTunnel':
\ 'vpn-tunnel-0'. The URL is malformed.","reason":"invalid"}],"message"
:"Invalid value for field 'resource.interfaces[0].linkedVpnTunnel': 'vpn-tunnel-0'.
\ The URL is malformed.","statusMessage":"Bad Request","requestPath":"
https://compute.googleapis.com/compute/v1/projects/dev-test/regions/asia-southeast1/routers\"\
,"httpMethod":"POST"}}"
Found the problem.
The methods listed on the API reference can be appended directly to the end of the 'type' field; alternatively, the 'action' field can be used, but that isn't recommended.
This allowed me to send an HTTP PATCH request:
'type': 'gcp-types/compute-v1:compute.routers.patch'
Previously I had the below which resulted in a POST:
'type': 'gcp-types/compute-v1:routers'
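Put together, the patched resource might look like the sketch below. Note that the error message in the question ("The URL is malformed") suggests linkedVpnTunnel expects a full self-link rather than a bare tunnel name; the helper function, the tunnel URL, and the resource names here are assumptions, not taken verbatim from a working template:

```python
def build_router_patch(cloud_router, project_id, region, ip_range):
    """Deployment Manager resource that PATCHes the existing Cloud Router."""
    # Assumption: linkedVpnTunnel needs the tunnel's full self-link.
    tunnel_url = (
        f"https://compute.googleapis.com/compute/v1/projects/"
        f"{project_id}/regions/{region}/vpnTunnels/vpn-tunnel-0"
    )
    return {
        'name': f"{cloud_router}-patch",
        # Appending the method name to the type selects the HTTP verb (PATCH).
        'type': 'gcp-types/compute-v1:compute.routers.patch',
        'properties': {
            'router': cloud_router,
            'project': project_id,
            'region': region,
            'interfaces': [{
                'name': f"{cloud_router}-bgp-int-0",
                'linkedVpnTunnel': tunnel_url,
                'ipRange': ip_range,
            }],
        },
    }
```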

Get proper usernames to populate on Superset with Azure SSO instead of ID string

I've finally gotten Azure Single Sign-On (SSO) connected to Apache Superset running via docker-compose, following the Flask docs. Users in my company's Azure group can create and access Superset accounts by logging in with Azure and they are assigned roles based on their identity. This is good.
The usernames they get assigned, however, are long Azure ID strings, which are undesirable in displays; that is what shows for my account on the List Users screen and on my profile.
How can I modify either my Azure application SSO setup or my Superset config to have Superset populate usernames like SFirke for the account usernames, instead of values like 3ee660ff-a274 ... ?
The security part of my config.py looks like this, almost identical to the Flask template:
OAUTH_PROVIDERS = [
    {
        "name": "azure",
        "icon": "fa-windows",
        "token_key": "access_token",
        "remote_app": {
            "client_id": "CLIENT_ID",
            "client_secret": "CLIENT_SECRET",
            "api_base_url": "https://login.microsoftonline.com/TENANT_ID/oauth2",
            "client_kwargs": {
                "scope": "User.read name preferred_username email profile upn groups",
                "resource": "RESOURCE_ID",
            },
            "request_token_url": None,
            "access_token_url": "https://login.microsoftonline.com/TENANT_ID/oauth2/token",
            "authorize_url": "https://login.microsoftonline.com/TENANT_ID/oauth2/authorize",
        },
    },
]
EDIT: It looks like the way to go is writing a custom userinfo retrieval method; there's a template on the Flask page linked above and an example used for Superset in this GitHub comment. I think I would use a line like "id": me["preferred_username"] or "id": me["upn"], based on the field names in the Microsoft docs.
However, Microsoft notes that this value can change over time and should not be used for authorization decisions. Since the oid value is immutable, and it is hardly visible to the typical user anyway, I plan to stick with it.
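A minimal sketch of such a claim-to-userinfo mapping, assuming the decoded token carries upn/preferred_username claims; the exact dict keys your SecurityManager expects may differ, and the sample claims below are made up:

```python
def azure_user_info(me):
    """Map Azure token claims to a userinfo dict for the custom retriever."""
    return {
        # Human-readable login instead of the oid GUID:
        "username": me.get("upn") or me.get("preferred_username", ""),
        "first_name": me.get("given_name", ""),
        "last_name": me.get("family_name", ""),
        "email": me.get("upn", ""),
    }

# Made-up claims of the shape the Microsoft docs describe:
claims = {
    "upn": "SFirke@example.com",
    "given_name": "Sam",
    "family_name": "Firke",
    "oid": "3ee660ff-a274-0000-0000-000000000000",
}
```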

How to import a knowledge base through the API?

https://cloud.google.com/dialogflow/es/docs/reference/rest/v2beta1/projects.knowledgeBases.documents/import
Suppose I have a CSV file in Cloud Storage to be imported. How exactly do I execute the API request above and import the knowledge base Q&As? I've added the documentation link above. I'm also getting the error below.
Change the parent to projects/your-project-id/knowledgeBases/xxxxxx and import should accept it.
But I suggest using projects.knowledgeBases.documents.create if you are planning to create a knowledge base from scratch using a CSV file. See this sample request via projects.knowledgeBases.documents.create:
parent: projects/your-project-id/knowledgeBases/xxxxx
importGcsCustomMetadata: false
Request Body:
{
    "contentUri": "gs://my-bucket/faq.csv",
    "displayName": "test_csv",
    "knowledgeTypes": [
        "FAQ"
    ],
    "mimeType": "text/csv"
}
Returns HTTP 200:
{
    "name": "projects/your-project-id/locations/us/operations/document-create-20210829-21261630297603-6127fbb9-0000-21dc-bec9-240588717654"
}
The created knowledge base then shows up in the Dialogflow console.
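If you are scripting this, the request body above can be assembled programmatically; a small sketch (the bucket, file, and display names are placeholders, and the helper function is hypothetical):

```python
def build_faq_document(gcs_uri, display_name):
    """Request body for projects.knowledgeBases.documents.create (FAQ CSV)."""
    return {
        "contentUri": gcs_uri,       # e.g. a gs:// URI in your bucket
        "displayName": display_name,
        "knowledgeTypes": ["FAQ"],
        "mimeType": "text/csv",
    }

body = build_faq_document("gs://my-bucket/faq.csv", "test_csv")
```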

Log entries api not retrieving log entries

I am trying to retrieve custom logs for a particular project in Google Cloud. I am using this API:
https://logging.googleapis.com/v2/entries:list
as per the example given in this link.
Below is the payload:
{
    "filter": "projects/projectA/logs/slow_log",
    "resourceNames": [
        "projects/projectA"
    ]
}
There is a custom log-based metric called slow_log that I created in projectA, which gathers query logs from the Cloud SQL database in that project. I also generated data before calling this API. I can see the data in the Stackdriver console, but I am unable to get it from the REST call.
Every time I run this API, I only get this response and nothing else:
"nextPageToken": "EAA4suKu3qnLwbtrSg8iDSIDCgEAKgYIgL7q8wVSBwibvMSMvhhglPDiiJzdjt_zAWocCgwI2buKhAYQlvTd2gESCAgLEMPV7ukCGAAgAQ"
Is there anything missing here?
How is it possible to pass a time range in this query?
Update
Changed the request as per the comment below and gave the full path of the logs; still, only the token is returned:
{
    "filter": "projects/projectA/logs/cloudsql.googleapis.com%2Fmysql-slow.log",
    "projectIds": [
        "projectA"
    ],
    "orderBy": "timestamp desc"
}
Also, when I run this command from the command line:
gcloud logging read logName="projects/projectA/logs/cloudsql.googleapis.com%2Fmysql-slow.log"
it fetches the logs, so I am not sure what I am missing in the API Explorer and Postman, where I get only the nextPageToken.
resourceNames, filter, and orderBy are mandatory; try it like this:
{
    "resourceNames": [
        "projects/projectA"
    ],
    "filter": "projects/projectA/logs/cloudsql.googleapis.com%2Fmysql-slow.log",
    "orderBy": "timestamp desc"
}
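As for the time-range part of the question: Cloud Logging filters accept timestamp comparisons, so the range goes inside the filter string. A sketch of building such a payload (the project and log names are taken from the question; the helper function is hypothetical):

```python
def build_entries_list_payload(project, log_id, start, end):
    """Payload for entries:list restricted to a time window (RFC 3339)."""
    log_name = f"projects/{project}/logs/{log_id}"
    return {
        "resourceNames": [f"projects/{project}"],
        "filter": (
            f'logName="{log_name}" '
            f'AND timestamp>="{start}" AND timestamp<="{end}"'
        ),
        "orderBy": "timestamp desc",
    }

payload = build_entries_list_payload(
    "projectA", "cloudsql.googleapis.com%2Fmysql-slow.log",
    "2021-08-01T00:00:00Z", "2021-08-02T00:00:00Z",
)
```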

Facebook Graph API: how to list all pages that my app is subscribed to?

So I am aware of how to check each page to get a list of all its subscribed apps.
But I would like to get a list of all pages my app has real-time update subscriptions for.
So I have tried this:
https://graph.facebook.com/v2.5/$app_id/subscriptions?access_token=$app_token
but this just brings back basic info about the app.
How do I get a list of the pages it already has subscriptions to?
Can anyone help?
This doesn't look to be possible.
The endpoint you're using - https://developers.facebook.com/docs/graph-api/reference/v2.8/app/subscriptions - returns the list of application webhooks (callback_url and the types of changes), which are called subscriptions for some reason. It's not about the pages that are subscribed to this webhook (or to this app in general).
Moreover, even at https://developers.facebook.com/apps/ for your app, under e.g. the Messenger tab, you will only see a subset of all the pages subscribed to the app. The visible subset is limited by your Facebook user account permissions, presumably showing only pages where you're either an Admin or an Editor.
Therefore, if such a call were possible, it would be tied to a User Access Token as well, not only an app token.
You can do this here: https://developers.facebook.com/tools/explorer
Once logged in, click the button on the right and select "Get User Access Token". You will need at least the manage_pages or pages_show_list permission to accomplish this.
Now, all you have to do is call this endpoint: /me/accounts.
It should list all pages subscribed to your app.
Hope it helps.
As per
https://developers.facebook.com/docs/facebook-login/access-tokens/#apptokens
it is possible (Graph API):
GET /oauth/access_token
?client_id={app-id}
&client_secret={app-secret}
&grant_type=client_credentials
And then /<app_id>/subscriptions
which returns something like:
{
    "data": [
        {
            "object": "application",
            "callback_url": "https:...",
            "active": true,
            "fields": [
                {
                    "name": "ads_rules_engine",
                    "version": "v2.9"
                }
            ]
        },
        {
            "object": "page",
            "callback_url": "https://...",
            "active": true,
            "fields": [
                {
                    "name": "leadgen",
                    "version": "v2.5"
                }
            ]
        }
    ]
}
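The two-step flow above (fetch an app token with client_credentials, then call /<app_id>/subscriptions) can be sketched as plain URL construction; the app id and secret are placeholders, and the helper functions are hypothetical:

```python
from urllib.parse import urlencode

GRAPH = "https://graph.facebook.com"

def app_token_url(app_id, app_secret):
    """GET /oauth/access_token with the client_credentials grant."""
    return f"{GRAPH}/oauth/access_token?" + urlencode({
        "client_id": app_id,
        "client_secret": app_secret,
        "grant_type": "client_credentials",
    })

def subscriptions_url(app_id, app_token):
    """GET /<app_id>/subscriptions using the app access token."""
    return f"{GRAPH}/{app_id}/subscriptions?" + urlencode(
        {"access_token": app_token})
```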