Editing API Documentation in CDK - amazon-web-services

In the AWS Documentation for API Gateway, there are ways to edit your API documentation in the console. Working in CDK, I can't find any way to achieve the same thing. The goal is to create the exact same outputs.
Question 1:
See the API Gateway documentation feature in the console. It shows how you can edit pretty much everything you need to get nice headings and so on in your Swagger / Redoc outputs, but I can't find any way of inserting chunks of YAML / JSON into the docs in CDK.
Question 2:
Is it possible to prevent your exported OAS file from including all of the OPTIONS methods? I want to automate the process of updating the API docs after cdk deploy, so it should be done as part of the code.
Question 3:
How can you add tags to break your API into logical groupings? Again, this is something that is very useful in standard API documentation, but I can't find the related section anywhere in the CDK.
Really, I think AWS could knock up a short petstore example to help us all out. If I get it working, perhaps I'll come back here and post up one of my own with notes.

Question 1 & Question 3:
import * as apigateway from 'aws-cdk-lib/aws-apigateway';

// https://docs.aws.amazon.com/apigateway/latest/api/API_DocumentationPart.html
new apigateway.CfnDocumentationPart(this, 'GetDocumentationPart', {
  location: {
    method: 'GET',
    path: '/my/path',
    type: 'METHOD',
  },
  properties: `{
    "tags": ["example"],
    "description": "This is a description of the method."
  }`,
  restApiId: 'api-id',
});

// https://docs.aws.amazon.com/apigateway/latest/api/API_DocumentationVersion.html
new apigateway.CfnDocumentationVersion(this, 'DocumentationVersion', {
  documentationVersion: 'generate-version-id',
  restApiId: 'api-id',
});
Notice that you need to publish a documentation version for the changes to take effect. For the version value, I suggest generating a UUID.
With the example above, GET /my/path will be grouped under the "example" tag in a Swagger UI.
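For example, one way to generate that version id at synth time (a minimal sketch of my own, assuming Node 14.17+ so the built-in crypto.randomUUID() is available):

import { randomUUID } from 'crypto';
// apigateway is imported as in the snippet above

new apigateway.CfnDocumentationVersion(this, 'DocumentationVersion', {
  // A fresh UUID on every synth forces API Gateway to publish the latest docs.
  documentationVersion: randomUUID(),
  restApiId: 'api-id',
});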
Question 2:
No, it is not possible.
I solved it by creating a Lambda function that listens to the API Gateway deployment event, fetches the exported JSON definition from API Gateway via the AWS SDK, parses it to remove the unwanted paths, and stores the result in an S3 bucket.
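For illustration, a minimal sketch of such a Lambda (AWS SDK v2 for JavaScript; the API id, stage name, and bucket are placeholders of my own, and the event-trigger wiring is omitted):

import { APIGateway, S3 } from 'aws-sdk';

const apigw = new APIGateway();
const s3 = new S3();

export const handler = async () => {
  // Export the OpenAPI 3.0 definition for the deployed stage.
  const exported = await apigw.getExport({
    restApiId: 'api-id',      // placeholder
    stageName: 'prod',        // placeholder
    exportType: 'oas30',
    accepts: 'application/json',
  }).promise();

  // Remove the CORS OPTIONS method from every path.
  const spec = JSON.parse(exported.body!.toString());
  for (const path of Object.keys(spec.paths ?? {})) {
    delete spec.paths[path].options;
  }

  // Store the cleaned spec for the docs pipeline to pick up.
  await s3.putObject({
    Bucket: 'my-docs-bucket', // placeholder
    Key: 'openapi.json',
    Body: JSON.stringify(spec, null, 2),
  }).promise();
};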

Related

Incorrect schema for yaml file using Serverless Framework Configuration - expected a string

I am using the Serverless Framework with AWS for my serverless.yml file, along with the YAML plugin in VSCode, and it says the value is an incorrect type and that it expects a string.
events:
  - cloudwatchEvent:
      name: ${self:custom.transcribeJobName.${self:provider.stage}}
      event:
        source:
          - 'aws.transcribe'
        detail-type:
          - 'Transcribe Job State Change'
        detail:
          TranscriptionJobStatus:
            - COMPLETED
            - FAILED
The Serverless Framework documentation on AWS events doesn't list AWS Transcribe at all. I also tried googling but couldn't find any native event for that.
https://www.serverless.com/framework/docs/providers/aws/events
So can you share which documentation or blog you're following to implement your solution?
Got this in my feed; the error is coming from the serverless JSON schema, which isn't always up to date since it isn't maintained by the same team.
I've created the PRs for both serverless json schema and for serverless types as well to get this issue sorted out at the source.
It's obviously not thorough and complete, but it's the best I could do.
https://github.com/lalcebo/json-schema/pull/13
https://github.com/DefinitelyTyped/DefinitelyTyped/pull/59966
Edit: schema updated; VSCode should stop complaining after a restart.

Can you start AI platform jobs from HTTP requests?

I have a web app (react + node.js) running on App Engine.
I would like to kick off (from this web app) a Machine Learning job that requires a GPU (running in a container on AI platform or running on GKE using a GPU node pool like in this tutorial, but we are open to other solutions).
I was thinking of trying what is described at the end of this answer, basically making an HTTP request to start the job using the projects.jobs.create API.
More details on the ML job in case this is useful: it generates an output every second that is stored on Cloud Storage and then read in the web app.
I am looking for examples of how to set this up. Where would the job configuration live, and how should I set up the API call to kick off that job? Are there other ways to achieve the same result?
Thank you in advance!
On Google Cloud, everything is an API, and you can interact with every product via HTTP requests. So you can definitely achieve what you want.
I personally don't have an example, but you have to build a JSON job description and POST it to the API.
Don't forget: when you interact with a Google Cloud API, you have to add an access token in the Authorization: Bearer header.
Where should be your job config description? It depends...
If it is strongly related to your App Engine app, you can add it to the App Engine code itself and have it "hard coded". The downside of that option is that any time you have to update the configuration, you have to redeploy a new App Engine version. But if your new version isn't correct, a rollback to a previous, stable version is easy and consistent.
If you prefer to update your config file and your App Engine code independently, you can store the config outside of the App Engine code, on Cloud Storage for instance. That way, the update is simple and easy: update the config on Cloud Storage to change the job configuration. However, there is no longer a relation between the App Engine version and the config version, and the rollback to a stable version can be more difficult.
You can also have a combination of both, where you have a default job configuration in your App Engine code, and an environment variable optionally set to point to a Cloud Storage file that contains a new version of the configuration.
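As a sketch of that combined approach (the environment variable name and default config are assumptions of mine; this uses the @google-cloud/storage package):

import { Storage } from '@google-cloud/storage';

// Hard-coded default, shipped with the App Engine code.
const DEFAULT_JOB_CONFIG = { scaleTier: 'BASIC_GPU' };

async function loadJobConfig(): Promise<object> {
  const uri = process.env.JOB_CONFIG_GCS_URI; // e.g. gs://my-bucket/job-config.json
  if (!uri) return DEFAULT_JOB_CONFIG;        // no override set: use the default

  // Download and parse the override from Cloud Storage.
  const [bucket, ...parts] = uri.replace('gs://', '').split('/');
  const [contents] = await new Storage().bucket(bucket).file(parts.join('/')).download();
  return JSON.parse(contents.toString());
}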
I don't know if it answers all your questions. Don't hesitate to comment if you want more details on some parts.
As mentioned, you can use the AI Platform API to create a job via a POST request.
Following is an example using JavaScript and the request library to trigger a job.
Some useful tips:
Use the Jobs console to create a job manually, then use the API to list that job; this gives you a perfect JSON example of how to trigger it.
You can use the Try this API tool to get the JSON output of the manually created job. Use this path to get the job: projects/<project name>/jobs/<job name>.
Get the authorization token using the OAuth 2.0 Playground for test purposes (Step 2 -> Access token). Check the docs for a definitive way.
Not all parameters are required in the JSON; this is just one example of a job that I created, with the JSON obtained using the steps above.
JS Example:
var request = require('request');

request({
  url: 'https://content-ml.googleapis.com/v1/projects/<project-name>/jobs?alt=json',
  method: 'POST',
  headers: { "authorization": "Bearer ya29.A0AR9999999999999999999999999" },
  json: {
    "jobId": "<job name>",
    "trainingInput": {
      "scaleTier": "CUSTOM",
      "masterType": "standard",
      "workerType": "cloud_tpu",
      "workerCount": "1",
      "args": [
        "--training_data_path=gs://<bucket>/*.jpg",
        "--validation_data_path=gs://<bucket>/*.jpg",
        "--num_classes=2",
        "--max_steps=2",
        "--train_batch_size=64",
        "--num_eval_images=10",
        "--model_type=efficientnet-b0",
        "--label_smoothing=0.1",
        "--weight_decay=0.0001",
        "--warmup_learning_rate=0.0001",
        "--initial_learning_rate=0.0001",
        "--learning_rate_decay_type=cosine",
        "--optimizer_type=momentum",
        "--optimizer_arguments=momentum=0.9"
      ],
      "region": "us-central1",
      "jobDir": "gs://<bucket>",
      "masterConfig": {
        "imageUri": "gcr.io/cloud-ml-algos/image_classification:latest"
      }
    },
    "trainingOutput": {
      "consumedMLUnits": 1.59,
      "isBuiltInAlgorithmJob": true,
      "builtInAlgorithmOutput": {
        "framework": "TENSORFLOW",
        "runtimeVersion": "1.15",
        "pythonVersion": "3.7"
      }
    }
  }
}, function(error, response, body){
  console.log(body);
});
Result:
...
{
  createTime: '2022-02-09T17:36:42Z',
  state: 'QUEUED',
  trainingOutput: {
    isBuiltInAlgorithmJob: true,
    builtInAlgorithmOutput: {
      framework: 'TENSORFLOW',
      runtimeVersion: '1.15',
      pythonVersion: '3.7'
    }
  },
  etag: '999999aaaac='
}
Thank you everyone for the input. This was useful to help me resolve my issue, but I wanted to also share the approach I ended up taking:
I started by making sure I could kick off my job manually.
I used this tutorial with a config.yaml file that looked like this:
workerPoolSpecs:
  machineSpec:
    machineType: n1-standard-4
    acceleratorType: NVIDIA_TESLA_T4
    acceleratorCount: 1
  replicaCount: 1
  containerSpec:
    imageUri: <Replace this with your container image URI>
    args: ["--some=argument"]
When I had a job that could be kicked off manually, I switched to using the Vertex AI Node.js API to start or cancel the job. The API exists in other languages as well.
I know my original question was about HTTP requests, but having a client library in my language was a lot easier for me, in particular because I didn't have to worry about authentication.
I hope that is useful; happy to provide more details if needed.
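For reference, a minimal sketch of starting such a custom job with the Vertex AI Node.js client (@google-cloud/aiplatform); the project, region, display name, and image URI are placeholders:

import { JobServiceClient } from '@google-cloud/aiplatform';

// Vertex AI requires the regional API endpoint.
const client = new JobServiceClient({
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
});

async function startJob(): Promise<void> {
  const [job] = await client.createCustomJob({
    parent: 'projects/<project-id>/locations/us-central1',
    customJob: {
      displayName: 'my-gpu-job',
      jobSpec: {
        workerPoolSpecs: [{
          machineSpec: {
            machineType: 'n1-standard-4',
            acceleratorType: 'NVIDIA_TESLA_T4',
            acceleratorCount: 1,
          },
          replicaCount: 1,
          containerSpec: { imageUri: '<your container image URI>' },
        }],
      },
    },
  });
  // The job can be cancelled later with client.cancelCustomJob({ name: job.name }).
  console.log('Created job:', job.name);
}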

List all LogGroups using cdk

I am quite new to the CDK, but I'm adding a LogQueryWidget to my CloudWatch Dashboard through the CDK, and I need a way to add all LogGroups ending with a suffix to the query.
Is there a way to either loop through all existing LogGroups and find the ones with the correct suffix, or a way to search through LogGroups?
const queryWidget = new LogQueryWidget({
  title: "Error Rate",
  logGroupNames: ['/aws/lambda/someLogGroup'],
  view: LogQueryVisualizationType.TABLE,
  queryLines: [
    'fields #message',
    'filter #message like /(?i)error/'
  ],
})
Is there any way I can make logGroupNames contain all LogGroups that end with a specific suffix?
You cannot do that dynamically (i.e. you can't make the query adjust automatically when you add a new LogGroup) without using something like an AWS Lambda that periodically updates your Log Query.
However, because CDK is just code, there is nothing stopping you from making an AWS SDK API call inside the code to retrieve all the log groups (see https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudWatchLogs.html#describeLogGroups-property) and then populating logGroupNames accordingly.
That way, when CDK synthesizes, it will make an API call to fetch the LogGroups, and the generated CloudFormation will contain the log groups you need. Note that this list will only be updated when you re-synthesize and re-deploy your stack.
Finally, note that there is a limit on how many Log Groups you can query with Logs Insights (20, according to https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html).
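A minimal sketch of that approach (AWS SDK v2 for JavaScript, as in the linked docs; the helper name is my own):

import { CloudWatchLogs } from 'aws-sdk';

// Collect the names of all log groups ending with the given suffix,
// following pagination until every page has been read.
async function logGroupNamesWithSuffix(suffix: string): Promise<string[]> {
  const logs = new CloudWatchLogs();
  const names: string[] = [];
  let nextToken: string | undefined;
  do {
    const page = await logs.describeLogGroups({ nextToken }).promise();
    for (const group of page.logGroups ?? []) {
      if (group.logGroupName?.endsWith(suffix)) names.push(group.logGroupName);
    }
    nextToken = page.nextToken;
  } while (nextToken);
  return names;
}

// Resolve the list before constructing the stack, then pass it in:
// const logGroupNames = await logGroupNamesWithSuffix('-prod');
// new LogQueryWidget({ title: 'Error Rate', logGroupNames, ... });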
If you want to do this as part of the deployment instead, you can create a custom resource using the AwsCustomResource and AwsSdkCall classes to make the AWS SDK API call (as mentioned by #Tofig above). You can read data from the API call response and act on it as you want.
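A sketch of that custom-resource variant (aws-cdk-lib/custom-resources; the construct id and physical-resource-id handling are my own choices):

import * as cr from 'aws-cdk-lib/custom-resources';

const describeLogGroups = new cr.AwsCustomResource(this, 'DescribeLogGroups', {
  onUpdate: {
    service: 'CloudWatchLogs',
    action: 'describeLogGroups',
    // A new physical id on every deploy so the call is repeated each time.
    physicalResourceId: cr.PhysicalResourceId.of(Date.now().toString()),
  },
  policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
    resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
});

// Individual fields of the SDK response can then be read at deploy time, e.g.:
// describeLogGroups.getResponseField('logGroups.0.logGroupName')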

How to setup AWS API gateway resource to take matrix parameters?

I have been trying to setup a resource using AWS API Gateway but I can't seem to find a way to either set or access matrix parameters.
I want to be able to set up a resource similar to the following -
GET /image;height=750;width=1000;format=png
Is it possible ?
You need to configure the setup to use Query Parameters.
You do this in the Method Request area of a method configuration from within the console:
https://console.aws.amazon.com/apigateway/home?region=<region-id>#/restapis/<api-id>/resources/<resource-id>/methods/<method-type>
You can also do this using the AWS API Gateway REST API's PutMethod endpoint, or the corresponding putMethod call in any of their SDKs.
API Gateway currently does not support matrix parameters. As a workaround, you could use query parameters as already mentioned and parse them in your backend.
Best,
Jurgen
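For example, a minimal sketch of that query-parameter workaround in a Lambda proxy backend (the parameter names mirror the matrix-parameter example above; the handler itself is an assumption of mine, not API Gateway output):

// GET /image?height=750&width=1000&format=png
export const handler = async (event: {
  queryStringParameters?: { [key: string]: string | undefined };
}) => {
  // Defaults apply when a parameter is omitted from the request.
  const { height = '750', width = '1000', format = 'png' } =
    event.queryStringParameters ?? {};
  // ...look up or render the image using height, width, and format...
  return { statusCode: 200, body: `image ${width}x${height}.${format}` };
};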
I realize that this is a very old question. Leaving my response in case someone has a similar challenge.
There are multiple ways to set this up. It all comes down to the context in which the API is intended to be used.
While query parameters will solve the problem, they are not best suited to representing a resource. They fit well with scenarios that involve filtering. If this API is intended to be used as a source for <img /> tags in the UI, the pattern GET .../images/{widthxheight}/{imageName}.{extension} can be used.
Ex: GET .../images/200x400/sponge-bob.png
However, if the intent is for this API to be used for lookups, the definition below can be used -
POST .../image-results
Content-Type: application/json

{
  "name": "sponge bob",
  "height": 400,
  "width": 200,
  "format": "png"
}

Custom endpoint in AWS powershell

I am trying to use AWS Powershell with Eucalyptus.
I can do this with AWS CLI with parameter --endpoint-url.
Is it possible to set endpoint url in AWS powershell?
Can I create custom region with my own endpoint URL in AWS Powershell?
--UPDATE--
Newer versions of the AWS Tools for Windows PowerShell (I'm running 3.1.66.0 according to Get-AWSPowerShellVersion) have an optional -EndpointUrl parameter for the relevant commands.
Example:
Get-EC2Instance -EndpointUrl https://somehostnamehere
Additionally, the aforementioned bug has been fixed.
Good stuff!
--ORIGINAL ANSWER--
TL;DR
Download the default endpoint config file from here: https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/endpoints.json
Customize it. Example:
{
  "version": 2,
  "endpoints": {
    "*/*": {
      "endpoint": "your_endpoint_here"
    }
  }
}
After importing the AWSPowerShell module, tell the SDK to use your customized endpoint config. Example:
[Amazon.AWSConfigs]::EndpointDefinition = "path to your customized Amazon.endpoints.json here"
Note: there is a bug in the underlying SDK that prevents endpoints that have a path component from being signed correctly. The bug affects this solution as well as the solution #HyperAnthony proposed.
Additional Info
Reading through the .NET SDK docs, I stumbled across a section that revealed that one can globally set the region rules given a file: http://docs.aws.amazon.com/AWSSdkDocsNET/latest/V2/DeveloperGuide/net-dg-config-other.html#config-setting-awsendpointdefinition
Unfortunately, I couldn't find anywhere where the format of such a file is documented.
I then spelunked through the AWSSDK.Core.dll code and found where the SDK loads the file (see the LoadEndpointDefinitions() method at https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/RegionEndpoint.cs).
Reading through the code, if a file isn't explicitly specified via AWSConfigs.EndpointDefinition, the SDK ultimately loads the file from an embedded resource (i.e. https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/endpoints.json).
I don't believe that it is. This list of common parameters (which can be used with all AWS PowerShell cmdlets) does not include a Service URL; it seems instead to opt for a simple string Region to set the Service URL based on a set of known regions.
This AWS .NET Development forum post suggests that you can set the Service URL on a .NET SDK config object, if you're interested in a possible alternative in PowerShell. Here's an example usage from that thread:
$config=New-Object Amazon.EC2.AmazonEC2Config
$config.ServiceURL = "https://ec2.us-west-1.amazonaws.com"
$client=[Amazon.AWSClientFactory]::CreateAmazonEC2Client($accessKeyID,$secretKeyID,$config)
It looks like you can use it with most config objects when setting up a client. Here are some examples that have the ServiceURL property. I would imagine that this is on nearly all AWS config objects:
AmazonEC2Config
AmazonS3Config
AmazonRDSConfig
Older versions of the documentation (for v1) noted that this property will be ignored if the RegionEndpoint is set. I'm not sure if this is still the case with v2.