API Gateway not importing exported definition

I am testing my backup procedure for an API in API Gateway.
I export my API from the API Gateway console within my AWS account, then go back into API Gateway and create a new API using "Import from Swagger".
I paste my exported definition in and create, which throws tons of errors.
From my reading, this seems to be a known issue / pain point.
I suspect the reason for the error(s) is that I use a custom authorizer:
"security" : [ {
"TestAuthorizer" : [ ]
}, {
"api_key" : [ ]
} ]
I use this on each method; hence, I get a lot of errors.
The weird thing is that I can clone this API perfectly fine, so I assumed I could take an exported definition and re-import it without issues.
Any ideas on how I can correct these errors (preferably within API Gateway itself, so that I can export / import with no issues)?
An example of one of my GET methods using this authorizer is:
"/api/example" : {
"get" : {
"produces" : [ "application/json" ],
"parameters" : [ {
"name" : "Authorization",
"in" : "header",
"required" : true,
"type" : "string"
} ],
"responses" : {
"200" : {
"description" : "200 response",
"schema" : {
"$ref" : "#/definitions/exampleModel"
},
"headers" : {
"Access-Control-Allow-Origin" : {
"type" : "string"
}
}
}
},
"security" : [ {
"TestAuthorizer" : [ ]
}, {
"api_key" : [ ]
} ]
}
Thanks in advance
UPDATE
The error(s) that I get when importing a definition I had just exported are:
Your API was not imported due to errors in the Swagger file.
Unable to put method 'GET' on resource at path '/api/v1/MethodName': Invalid authorizer ID specified. Setting the authorization type to CUSTOM or COGNITO_USER_POOLS requires a valid authorizer.
I get the message for each method in my API - so there is a lot.
Additionally, right at the end of the message, I get this:
Additionally, these warnings were found:
Unable to create authorizer from security definition: 'TestAuthorizer'. Extension x-amazon-apigateway-authorizer is required. Any methods with security: 'TestAuthorizer' will not be created. If this security definition is not a configured authorizer, remove the x-amazon-apigateway-authtype extension and it will be ignored.
I have tried importing with the "Ignore" option selected; same result.

Make sure you are exporting your Swagger with both the integrations and authorizers extensions.
Try exporting your Swagger using the AWS CLI:
aws apigateway get-export \
    --parameters '{"extensions":"integrations,authorizers"}' \
    --rest-api-id {api_id} \
    --stage-name {stage_name} \
    --export-type swagger swagger.json
The output will be written to the swagger.json file.
For more details about the Swagger custom extensions, see this.
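As a quick sanity check (a rough sketch; it assumes the export above was saved as swagger.json and that jq is installed), you can confirm the authorizer extension actually made it into the export and then re-import from the CLI:

# The custom authorizer should carry the x-amazon-apigateway-authorizer extension
jq '.securityDefinitions.TestAuthorizer' swagger.json

# Re-import the exported definition as a new API
aws apigateway import-rest-api --fail-on-warnings --body 'file://swagger.json'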

For anyone who may come across this issue:
After LOTS of troubleshooting and eventually involving the AWS Support Team, this has been resolved and identified as an AWS CLI client bug (confirmed by the AWS Support Team).
Their final response:
Thank you for providing the details requested. After going through the AWS CLI version and error details, I can confirm the error is because of a known issue with the PowerShell AWS CLI. I apologize for the inconvenience caused by the error. To get around the error, I recommend going through the following steps:
1. Create a file named data.json in the current directory where the PowerShell command is to be executed.
2. Save the following contents to the file: {"extensions":"authorizers,integrations"}
3. In the PowerShell console, ensure the current working directory is the same as the location where data.json is present.
4. Execute the following command: aws apigateway get-export --parameters file://data.json --rest-api-id APIID --stage-name dev --export-type swagger C:\temp\export.json
Using this finally resolved my issue. I look forward to the fix in one of the upcoming versions.
PS - this is currently on the latest version:
aws --version
aws-cli/1.11.44 Python/2.7.9 Windows/8 botocore/1.5.7

Related

how to find GCP services interdependencies

Before I can use a particular service, I have to enable the API in my Google project. Sometimes when I do this, more services are enabled than the one I specified. For example, enabling the cloudfunctions service actually enables multiple APIs in the background, like pubsub, storage, etc.
I followed this article to find the service dependencies: https://binx.io/blog/2020/10/03/how-to-find-google-cloud-platform-services-dependencies/. The command in the article shows the dependent services as below:
gcloud services list \
  --available --format json | \
  jq --arg service cloudfunctions.googleapis.com \
  'map(select(.config.name == $service) |
    {
      name: .config.name,
      dependsOn: .dependencyConfig.dependsOn
    }
  )'
The result is:
{
  "name": "cloudfunctions.googleapis.com",
  "dependsOn": [
    "cloudfunctions.googleapis.com",
    "logging.googleapis.com",
    "pubsub.googleapis.com",
    "source.googleapis.com",
    "storage-api.googleapis.com",
    "storage-component.googleapis.com"
  ]
}
When I execute this command, I am not getting the output above; the "dependsOn" section shows as "null".
Below is the output of the same command when I executed it in Cloud Shell:
[
  {
    "name": "cloudfunctions.googleapis.com",
    "dependsOn": null
  }
]
Is there any alternate way to identify the service interdependencies? Can someone help with this, please?
I do not see the dependencyConfig section output by the command on my system.
When I look at the API documentation, the Service resource does not contain the section dependencyConfig.
It appears that Google has removed that item from the API that the command gcloud services list uses to display service information.
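As a quick way to confirm this on your own project (a rough sketch; the service name is just the one from the question and jq is assumed to be available), you can list the top-level keys the API actually returns for a service and check whether dependencyConfig is among them:

# Show which fields the services API returns for cloudfunctions.googleapis.com;
# if dependencyConfig is absent here, the API no longer exposes dependency information.
gcloud services list --available --format json | \
  jq 'map(select(.config.name == "cloudfunctions.googleapis.com")) | .[0] | keys'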

Is it possible to download the contents of a public Lambda layer from AWS given the ARN?

I want to download the contents of the public layer (a more compact version of spaCy) from this GitHub repository, given its ARN:
"arn:aws:lambda:us-west-2:113088814899:layer:Klayers-python37-spacy:27"
How can I achieve this?
You can get it from an ARN using the get-layer-version-by-arn command in the CLI.
You can run the command below to get the details of the Lambda layer you requested.
aws lambda get-layer-version-by-arn \
--arn "arn:aws:lambda:us-west-2:113088814899:layer:Klayers-python37-spacy:27"
An example of the response you will receive is below
{
  "LayerVersionArn": "arn:aws:lambda:us-west-2:123456789012:layer:AWSLambda-Python37-SciPy1x:2",
  "Description": "AWS Lambda SciPy layer for Python 3.7 (scipy-1.1.0, numpy-1.15.4) https://github.com/scipy/scipy/releases/tag/v1.1.0 https://github.com/numpy/numpy/releases/tag/v1.15.4",
  "CreatedDate": "2018-11-12T10:09:38.398+0000",
  "LayerArn": "arn:aws:lambda:us-west-2:123456789012:layer:AWSLambda-Python37-SciPy1x",
  "Content": {
    "CodeSize": 41784542,
    "CodeSha256": "GGmv8ocUw4cly0T8HL0Vx/f5V4RmSCGNjDIslY4VskM=",
    "Location": "https://awslambda-us-west-2-layers.s3.us-west-2.amazonaws.com/snapshots/123456789012/..."
  },
  "Version": 2,
  "CompatibleRuntimes": [
    "python3.7"
  ],
  "LicenseInfo": "SciPy: https://github.com/scipy/scipy/blob/master/LICENSE.txt, NumPy: https://github.com/numpy/numpy/blob/master/LICENSE.txt"
}
Once you run this, the response will contain a "Content" key with a "Location" subkey, which references the S3 path from which you can download the layer contents.
You can download from this path; you will then need to configure this as a Lambda layer again, removing any dependencies you do not need.
Please ensure in this process that you only remove unnecessary dependencies.
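A minimal sketch of that flow, assuming the AWS CLI and curl are available (the layer name my-spacy-layer is just an example):

# Extract the pre-signed S3 URL from the response and download the layer archive
URL=$(aws lambda get-layer-version-by-arn \
    --arn "arn:aws:lambda:us-west-2:113088814899:layer:Klayers-python37-spacy:27" \
    --query 'Content.Location' --output text)
curl -o layer.zip "$URL"

# Republish the archive as a layer in your own account
# (very large archives may need to be uploaded to S3 and referenced via --content instead)
aws lambda publish-layer-version \
    --layer-name my-spacy-layer \
    --zip-file fileb://layer.zip \
    --compatible-runtimes python3.7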

AWS Api Gateway Custom Domain Name with undefined stage

I'm trying to set up a Custom Domain Name in AWS API Gateway where callers have to explicitly specify the stage name after any base path name. It is something I did in the past, but now it seems that, since AWS updated the console interface, it is no longer possible.
The final url should be like:
https://example.com/{basePath}/{stage}/function
I tried using the console, but stage is now a mandatory field (chosen from a drop-down).
I tried using the AWS CLI, but stage is again a mandatory field:
aws: error: the following arguments are required: --stage
I tried using Boto3, following the documentation (https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/apigateway.html#APIGateway.Client.create_base_path_mapping). Even though the docs say stage can be specified as '(none)' ("The name of the API's stage that you want to use for this mapping. Specify '(none)' if you want callers to explicitly specify the stage name after any base path name."), doing this returns an error:
botocore.errorfactory.BadRequestException: An error occurred (BadRequestException) when calling the CreateBasePathMapping operation: Invalid stage identifier specified
What is funny (or frustrating) is that I have some custom domain names created with the old console and that are perfectly working, without any stage defined.
It is still possible to specify only the "API ID" and "Path" and leave out the "stage" parameter. I have tried this both from the console and the CLI:
From the console: The "Stage" setting is a drop-down as you mentioned, but it can be left blank (don't select anything). If you did select a stage, delete the API mapping and add it again.
From the CLI: I just tried this as well and it works fine for me on CLI version aws-cli/1.18.69 Python/3.7.7 Darwin/18.7.0 botocore/1.16.19:
$ aws apigateway create-base-path-mapping --domain-name **** --rest-api-id *** --base-path test
{
  "basePath": "test",
  "restApiId": "***"
}
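If a mapping with a stage already exists, a rough sketch of the "delete it and add it again" route from the CLI (the domain name, base path and API ID here are placeholders):

# Remove the existing mapping that was created with a stage
aws apigateway delete-base-path-mapping \
    --domain-name example.com \
    --base-path test

# Re-create it without --stage so callers must append the stage themselves,
# e.g. https://example.com/test/{stage}/function
aws apigateway create-base-path-mapping \
    --domain-name example.com \
    --rest-api-id abc123 \
    --base-path test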

I'm getting an error creating an AWS AppSync Authenticated DataSource

I'm working through the Build On Serverless | S2 E4 video and I've gotten to the point of creating an authenticated HTTP data source using the AWS CLI. I'm getting this error:
Parameter validation failed:
Unknown parameter in httpConfig: "authorizationConfig", must be one of: endpoint
I think I'm using the same information provided in the video, repository and gist, updated for my own AWS account. It seems like some kind of formatting or missing-information error, but I'm just not seeing the problem.
When I remove the "authorizationConfig" property from state-machine-datasource.json, the command works.
I've reviewed the code against the information in the video as well as the documentation and examples (here and here) provided by AWS.
This is the command I'm running:
aws appsync create-data-source --api-id {my app sync app id} --name ProcessBookingStateMachine \
    --type HTTP --http-config file://src/backend/booking/state-machine-datasource.json \
    --service-role-arn arn:aws:iam::{my account}:role/AppSyncProcessBookingState --profile default
This is my state-machine-datasource.json:
{
  "endpoint": "https://states.us-east-2.amazonaws.com",
  "authorizationConfig": {
    "authorizationType": "AWS_IAM",
    "awsIamConfig": {
      "signingRegion": "us-east-2",
      "signingServiceName": "states"
    }
  }
}
Thanks,
I needed to update my AWS CLI to the latest version. The authenticated HTTP data source support is something fairly new, I guess.
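A minimal sketch of checking and upgrading the CLI (assuming a pip-based install of AWS CLI v1; v2 uses its own installer instead):

# Check the currently installed version
aws --version

# Upgrade a pip-installed AWS CLI v1 so create-data-source accepts httpConfig.authorizationConfig
pip install --upgrade awscli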

Cloudformation - Redeploy environment that uses a recordset (with Jenkins)

TL;DR - What's the recommended way, using a CI server, to keep an AWS environment up to date, and always pointed to from the same CNAME?
We're just starting to use AWS with a new project, and as part of the project I've been tasked with creating a simple demo environment, and updating this environment each night to show the previous day's progress.
I'm using Jenkins and the Cloudformation plugin to do this, and it works great in creating a simple EC2 instance in an existing security group, pointed to by a Route53 CNAME so it can be browsed at subdomain.example.com.
The problem I have is that I can't redeploy the same stack, because the recordset already exists, and CF won't overwrite it.
There are lots of guides on how to deploy an environment, but I'm struggling to find one on how to keep an environment up to date.
So I guess my question is: What's the recommended way, using a CI server, to keep an AWS environment up to date, and always pointed to from the same CNAME?
I agree with the comments on your question, i.e. it's probably better to create a clean server and upload / update to it via continuous integration (Jenkins). Docker, which you mentioned in a later comment, is super useful in this scenario.
However, if you are leaning towards "immutable infrastructure" and want everything encapsulated in your CloudFormation template (including creating a record in Route53), you could do something like the following code snippet in your AWS::CloudFormation::Init section (see http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html for more info):
"Resources": {
"MyServer": {
"Type": "AWS::EC2::Instance",
"Metadata": {
"AWS::CloudFormation::Init": {
"configSets" : { "Install" : [ "UpdateRoute53", "ConfigSet2, .... ] },
"UpdateRoute53" : {
"files" : {
"/usr/local/bin/cli53" : {
"source" : "https://github.com/barnybug/cli53/releases/download/0.6.3/cli53-linux-amd64",
"mode" : "000755", "owner" : "root", "group" : "root"
},
"/tmp/update_route53.sh" : {
"content" : { "Fn::Join" : ["", [
"#!/bin/bash\n\n",
"PRIVATE_IP=`curl http://169.254.169.254/latest/meta-data/local-ipv4/`\n",
"/usr/local/bin/cli53 rrcreate ",
{"Ref": "Route53HostedZone" },
" \"", { "Ref" : "ServerName" },
" 300 A $PRIVATE_IP\" --replace\n"
]]},
"mode" : "000755", "owner" : "root", "group" : "root"
}
},
"commands" : {
"01_UpdateRoute53" : {
"command" : "/tmp/update_route53.sh > /tmp/update-route53.log 2>&1"
}
}
}
}
},
"Properties": { ... }
}
}
....
I've omitted large chunks of the template to focus on the important info. The "UpdateRoute53" section creates 2 files:
/usr/local/bin/cli53 - CLI53 is a great little wrapper program around AWS Route53 (as the AWS CLI's route53 commands are pretty horrible to use, i.e. they require creating large chunks of JSON) - see https://github.com/barnybug/cli53 for more info on CLI53
/tmp/update_route53.sh - creates a script to update Route53 via the CLI53 tool we installed in (1). This script determines the PRIVATE_IP via a curl command to the special AWS metadata endpoint (see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html for more details). The "zone id" of the correct hosted zone is injected via a CloudFormation parameter (i.e. {"Ref": "Route53HostedZone" }). Finally, the name of the record comes from the "ServerName" parameter, but how this is set could vary from template to template.
In the "commands" section we run the script we created in the "files" section (2) and send the output to a log file in the /tmp folder; the resolved script looks roughly like the sketch below.
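For illustration, once the Fn::Join is resolved, the generated /tmp/update_route53.sh looks something like this (a sketch; the zone ID and record name are example values standing in for the Route53HostedZone and ServerName parameters):

#!/bin/bash

# Look up this instance's private IP from the EC2 metadata service
PRIVATE_IP=`curl http://169.254.169.254/latest/meta-data/local-ipv4`

# Upsert an A record for the server in the hosted zone (--replace overwrites any existing record)
/usr/local/bin/cli53 rrcreate VIWIWK4PYAC23B "subdomain 300 A $PRIVATE_IP" --replace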
NOTE (1) - The parameter Route53HostedZone can be declared as follows:
"Route53HostedZone": {
"Description": "Route 53 hosted zone for updating internal DNS",
"Type": "AWS::Route53::HostedZone::Id",
"Default": "VIWIWK4PYAC23B"
}
The cool thing about the "AWS::Route53::HostedZone::Id" parameter type is that it displays a combo box (when running a CloudFormation template via the AWS web console) showing the zone name, with the value being the Zone ID.
NOTE (2) - The --replace attribute in the CLI53 script overwrites existing records, which is probably what you want.
NOTE (3) - Another option would be to SSH via Jenkins (e.g. using the "Publish Over SSH Plugin" - https://wiki.jenkins-ci.org/display/JENKINS/Publish+Over+SSH+Plugin), determine the private IP, and use the CLI53 script to update Route53 either from the server you've logged into or even the build server (where Jenkins is running); a rough sketch of that follows.
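A rough sketch of that alternative, run from the Jenkins build server (the host name, zone ID and record name are placeholders, and cli53 is assumed to be installed on the build server):

# Ask the newly created instance for its private IP over SSH
PRIVATE_IP=$(ssh ec2-user@new-demo-instance "curl -s http://169.254.169.254/latest/meta-data/local-ipv4")

# Upsert the record from the build server instead of from the instance itself
cli53 rrcreate VIWIWK4PYAC23B "subdomain 300 A $PRIVATE_IP" --replace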
Lots of options - hope you get it sorted! :-)