I want to create an API Gateway connected to a Lambda function with an alias. I have an IntegrationRouteTargetProvider which provides integration routes to the API. I got the URI from the Lambda, so I think it is correct. I also checked a number of SO questions, and the documentation gives the format arn:aws:apigateway:{region}:{subdomain.service|service}:path|action/{service_api}.
My URI is:
arn:aws:apigateway:eu-central-1:lambda:path/2015-03-31/functions/arn:aws:lambda:eu-central-1:051069080387:function:deploy-test-4-lambda/invocations.
However, when I try to create the API I get this error:
Unable to put integration on 'ANY' for resource at path '/': Integrations of type 'AWS_PROXY' currently only supports Lambda function and Firehose stream invocations.
Here is my IntegrationRouteTargetProvider:
export class AliasLambdaProvider implements IntegrationRouteTargetProvider {
    target(name: string, parent: pulumi.Resource): pulumi.Input<IntegrationTarget> {
        return {
            type: "aws_proxy",
            uri: 'arn:aws:apigateway:eu-central-1:lambda:path/2015-03-31/functions/arn:aws:lambda:eu-central-1:051069080387:function:deploy-test-4-lambda/invocations',
        };
    }
}
and then I use it when creating the API:
return new API(name, {
    routes: [
        {
            path: "/",
            target: new AliasLambdaProvider()
        }
    ],
    stageName: name + "-stage"
}, { provider });
You're using:
type: "aws_proxy"
This means everything is passed straight through to the Lambda, and headers need to be handled in the Lambda itself. Integration configuration is disabled for the AWS_PROXY type, hence the error. However, if you want to define integration methods, mapping templates, etc., use:
type: "aws"
Related
I have the following serverless.yaml:
getSth:
  handler: src/handlers/getSth.getSth
  events:
    - http:
        path: getSth
        method: get
        cors: true
        private: true
        authorizer: authorizerFunc

authorizerFunc:
  handler: src/handlers/authorizer.authorizer
getSth handler:
module.exports.getSth = async (event, context) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({ message: "nice you can call this" }),
  };
  return response;
};
authorizerFunc:
module.exports.authorizer = async (event, context) => {
  console.log('i will fail your authorization');
  let response = {
    isAuthorized: false,
    context: {
      stringKey: "value",
      numberKey: 1,
      booleanKey: true,
      arrayKey: ["value1", "value2"],
      mapKey: { value1: "value2" },
    },
  };
  return response;
};
That results in a 200 response, despite the fact that the authorizer should not allow the getSth function to execute. Also, the console log 'i will fail your authorization' is never written.
What am I doing wrong?
I have tried to analyse your code and found several points where you can start digging.
Private functions
The key private: true actually makes API Gateway require an API key. I have not tried it myself, but perhaps private: true and an authorizer do not go together.
It is still strange that you are able to call the function at all. How do you call it? From the CLI, or through API Gateway with an API testing tool such as Postman or Insomnia?
Authorizer Configuration
Your authorizer configuration is definitely correct; we have the very same setup in our code.
Authorizer Events
An authorizer function receives an APIGatewayTokenAuthorizerEvent and should reply with an APIGatewayAuthorizerResult. The latter looks much like an IAM policy statement, and we do not use the field isAuthorized: false from your example; I do not understand where this field is coming from. Our result to allow a request looks more or less like the following:
{
  "principalId": "<our auth0 user-id>",
  "policyDocument": {
    "Version": "2012-10-17",
    "Statement": [{
      "Action": "execute-api:Invoke",
      "Effect": "Allow",
      "Resource": "*"
    }]
  }
}
Note how the field principalId refers to the username we get from our identity provider (Auth0). In reality it looks something like this: auth0|6f84a3z162c72d0d0d000a00.
Further, we can allow or deny the function call via the Effect field which can hold the values Allow or Deny.
Finally, you can specify which resource the caller is permitted to call. For the simplicity of this answer I put * there. Of course, in the real world you would pull the ARN of the called function from the event and context and pass that into the policy document.
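To make this concrete, here is a minimal sketch of such an authorizer in TypeScript, assuming the community type definitions from the aws-lambda types package; the principalId and the deny-by-default logic are illustrative:

import { APIGatewayTokenAuthorizerEvent, APIGatewayAuthorizerResult } from 'aws-lambda';

// Sketch: deny-by-default token authorizer; flip Effect to "Allow"
// once the token in event.authorizationToken checks out.
export const authorizer = async (
  event: APIGatewayTokenAuthorizerEvent
): Promise<APIGatewayAuthorizerResult> => {
  return {
    principalId: 'anonymous', // illustrative; use the user id from your identity provider
    policyDocument: {
      Version: '2012-10-17',
      Statement: [{
        Action: 'execute-api:Invoke',
        Effect: 'Deny',
        Resource: event.methodArn, // the ARN of the API route being called
      }],
    },
  };
};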
Opinion
We also had a hard time figuring this out from the AWS documentation. Of course, for AWS the preferred integration would be via AWS Cognito (which I also prefer due to the more streamlined integration). We benefited quite a bit from the use of TypeScript here, which we use to enforce types in and out of our Serverless functions. That way it was rather easy to figure out what the response needs to look like.
Background
We use the custom authorizer integration to allow a user base that already exists in Auth0 to consume our Serverless-based APIs via application clients or single-page applications.
I would love to be able to update an existing Lambda function via the AWS CDK; I need to update its environment variable configuration. From what I can see this is not possible. Is there something workable to make this happen?
I am using code like this to import the lambda:
const importedLambdaFromArn = lambda.Function.fromFunctionAttributes(
  this,
  'external-lambda-from-arn',
  {
    functionArn: 'my-arn',
    role: importedRole,
  }
);
For now, I have to manually alter a CloudFormation template. Updating directly in the CDK would be much nicer.
Yes, it is possible, although you should read @Allan_Chua's answer before actually doing it. Lambda's UpdateFunctionConfiguration API can modify a deployed function's environment variables. The CDK's AwsCustomResource construct lets us call that API during stack deployment.*
Let's say you want to set TABLE_NAME on a previously deployed lambda to the value of a DynamoDB table's name:
// MyStack.ts
// Assumed imports (CDK v2 module paths; use the @aws-cdk/* packages for CDK v1):
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as dynamo from 'aws-cdk-lib/aws-dynamodb';
import * as cr from 'aws-cdk-lib/custom-resources';

// arn holds the ARN of the already-deployed function
const existingFunc = lambda.Function.fromFunctionArn(this, 'ImportedFunction', arn);

const table = new dynamo.Table(this, 'DemoTable', {
  partitionKey: { name: 'id', type: dynamo.AttributeType.STRING },
});

new cr.AwsCustomResource(this, 'UpdateEnvVar', {
  onCreate: {
    service: 'Lambda',
    action: 'updateFunctionConfiguration',
    parameters: {
      FunctionName: existingFunc.functionArn,
      Environment: {
        Variables: {
          TABLE_NAME: table.tableName,
        },
      },
    },
    physicalResourceId: cr.PhysicalResourceId.of('DemoTable'),
  },
  policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
    resources: [existingFunc.functionArn],
  }),
});
Under the hood, the custom resource creates a lambda that makes the UpdateFunctionConfiguration call using the JS SDK when the stack is created. There are also onUpdate and onDelete cases to handle.
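If you also want later deployments to keep the variable in sync, a hedged sketch is to reuse the same SDK call for onUpdate; a stable physicalResourceId keeps CloudFormation from treating the update as a replacement:

// Sketch: share one call definition between onCreate and onUpdate.
const updateEnvCall: cr.AwsSdkCall = {
  service: 'Lambda',
  action: 'updateFunctionConfiguration',
  parameters: {
    FunctionName: existingFunc.functionArn,
    Environment: { Variables: { TABLE_NAME: table.tableName } },
  },
  physicalResourceId: cr.PhysicalResourceId.of('DemoTable'),
};

new cr.AwsCustomResource(this, 'UpdateEnvVar', {
  onCreate: updateEnvCall,
  onUpdate: updateEnvCall,
  policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
    resources: [existingFunc.functionArn],
  }),
});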
* Again, whether this is a good idea or not depends on the use case. You could always call UpdateFunctionConfiguration without the CDK.
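For reference, a one-off version of that call with the JS SDK (v2) might look like the sketch below; the function name, region, and variable are placeholders:

import { Lambda } from 'aws-sdk';

const lambdaClient = new Lambda({ region: 'eu-central-1' }); // region is illustrative

async function setTableName(): Promise<void> {
  // Note: the Environment parameter replaces the function's existing variables,
  // so include any variables you want to keep.
  await lambdaClient.updateFunctionConfiguration({
    FunctionName: 'my-function', // placeholder
    Environment: { Variables: { TABLE_NAME: 'my-table' } },
  }).promise();
}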
The main purpose of CDK is to enable AWS customers to provision resources automatically. If we're attempting to update settings of pre-existing resources that are managed by other CloudFormation stacks, it is better to update the variable in its parent CloudFormation template instead of from CDK. This gives you the following advantages:
There's a single source of truth for what the variable should look like.
There's no tug of war between the CDK and the CloudFormation template whenever an update is pushed from either source.
Otherwise, since this is a compute layer, just remove the Lambda function from the CloudFormation template and manage it fully with CDK!
Hope this advice helps.
If you are using AWS Amplify, the accepted answer will not work. Instead, you can do this by exporting a CloudFormation output from your custom resource stack and then referencing that output via an input parameter in the other stack.
With CDK
new CfnOutput(this, 'MyOutput', { value: 'MyValue' });
With CloudFormation Template
"Outputs": {
"MyOutput": {
"Value": "MyValue"
}
}
Add an input parameter to the cloudformation-template.json of the resource you want to reference your output value in:
"Parameters": {
"myInput": {
"Type": "String",
"Description": "A custom input"
},
}
Create a parameters.json file that passes the output to the input parameter:
{
  "myInput": {
    "Fn::GetAtt": ["customResource", "Outputs.MyOutput"]
  }
}
Finally, reference that input in your stack:
"Resources": {
"LambdaFunction": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Environment": {
"Variables": {
"myEnvVar": {
"Ref": "myInput"
},
}
},
}
}
}
I'm using an HTTP API Gateway to trigger a Lambda invocation. When I use the URL from Postman, there are no issues. When I use it from my browser, it always makes a second request, for the favicon.
Is there any way, in the gateway itself, to block the favicon request from getting to the Lambda?
I'm using the following Terraform:
resource "aws_apigatewayv2_api" "retry_api" {
name = "${var.environment}_${var.cdp_domain}_retry_api"
protocol_type = "HTTP"
description = "To pass commands into the retry lambda."
target = module.retry-support.etl_lambda_arn
}
resource "aws_lambda_permission" "allow_retry_api" {
statement_id = "AllowAPIgatewayInvokation"
action = "lambda:InvokeFunction"
function_name = module.retry-support.etl_lambda_arn
principal = "apigateway.amazonaws.com"
source_arn = "${aws_apigatewayv2_api.retry_api.execution_arn}/*/*"
}
This won't block the favicon request made from the browser; rather, it won't invoke the Lambda for those requests.
Assuming the API endpoint is /hello and the HTTP method is GET, you can restrict API Gateway to invoke the Lambda only for this URL. The format looks like this:
arn:${AWS::Partition}:execute-api:${AWS::Region}:${AWS::AccountId}:${__ApiId__}/${__Stage__}/GET/hello
So the source_arn in aws_lambda_permission would change to something like this:
source_arn = "${aws_apigatewayv2_api.retry_api.execution_arn}/*/*/GET/hello"
The answer assumes the existing /*/* at the end stands for the apiId and stage respectively. Otherwise, check the value of ${aws_apigatewayv2_api.retry_api.execution_arn} and make modifications accordingly.
This answer can also help: you can provide the OpenAPI specification in the body for your supported path only. For the above case, the relevant paths section of an OpenAPI specification invoking a Lambda named HelloWorldFunction would look like:
"paths": {
"/hello": {
"get": {
"x-amazon-apigateway-integration": {
"httpMethod": "POST",
"type": "aws_proxy",
"uri": {
"Fn::Sub": "arn:${AWS::Partition}:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${HelloWorldFunction.Arn}/invocations"
},
"payloadFormatVersion": "2.0"
},
"responses": {} //Provide the expected response model
}
}
}
Here is a link to the OpenAPI Specification.
Normally, I would handle this by putting CloudFront in front of the API Gateway and mapping favicon.ico to an S3 bucket.
If you really want to handle it at the API GW level, you can create a /favicon.ico route and set its integration to MOCK; this returns a fixed value and does not invoke the Lambda (or any other backend).
This AWS CloudFormation document suggests that it is possible to create an 'AWS::SSM::Document' resource with a DocumentType of 'Package'. However, the 'Content' required to achieve this remains a mystery.
Is it possible to create a Document of type 'Package' via CloudFormation, and if so, what is the equivalent of this valid CLI command written as a CloudFormation template (preferably in YAML format)?
aws ssm create-document --name my-package --content "file://manifest.json" --attachments Key="SourceUrl",Values="s3://my-s3-bucket" --document-type Package
Failed attempt: the content used is an inline version of the manifest.json that was provided when using the CLI option. There doesn't seem to be a way to specify an AttachmentSource when using CloudFormation:
AWSTemplateFormatVersion: 2010-09-09
Resources:
  Document:
    Type: AWS::SSM::Document
    Properties:
      Name: 'my-package'
      Content: !Sub |
        {
          "schemaVersion": "2.0",
          "version": "Auto-Generated-1579701261956",
          "packages": {
            "windows": {
              "_any": {
                "x86_64": {
                  "file": "my-file.zip"
                }
              }
            }
          },
          "files": {
            "my-file.zip": {
              "checksums": {
                "sha256": "sha...."
              }
            }
          }
        }
      DocumentType: Package
CloudFormation Error
AttachmentSource not provided in the input request. (Service: AmazonSSM; Status Code: 400; Error Code: InvalidParameterValueException;
Yes, this is possible! I've successfully created a resource with DocumentType: Package and the package shows up in the SSM console under Distributor Packages after the stack succeeds.
Your YAML is almost there, but you need to also include the Attachments property that is now available.
Here is a working example:
AWSTemplateFormatVersion: "2010-09-09"
Description: Sample to create a Package type Document
Parameters:
S3BucketName:
Type: "String"
Default: "my-sample-bucket-for-package-files"
Description: "The name of the S3 bucket."
Resources:
CrowdStrikePackage:
Type: AWS::SSM::Document
Properties:
Attachments:
- Key: "SourceUrl"
Values:
- !Sub "s3://${S3BucketName}"
Content:
!Sub |
{
"schemaVersion": "2.0",
"version": "1.0",
"packages": {
"windows": {
"_any": {
"_any": {
"file": "YourZipFileName.zip"
}
}
}
},
"files": {
"YourZipFileName.zip": {
"checksums": {
"sha256": "7981B430E8E7C45FA1404FE6FDAB8C3A21BBCF60E8860E5668395FC427CE7070"
}
}
}
}
DocumentFormat: "JSON"
DocumentType: "Package"
Name: "YourPackageNameGoesHere"
TargetType: "/AWS::EC2::Instance"
Note: for the Attachments property you must use the SourceUrl key when using DocumentType: Package. When it creates the package, the creation process appends a "/" to this S3 bucket URL and concatenates it with each file name listed in the manifest that forms the Content property.
It seems there is no direct way to create an SSM document with an attachment via CloudFormation (CFN). As a workaround, you can use a Lambda-backed custom resource: the Lambda calls the SDK API to create the SSM document, and a Custom Resource in the CFN template invokes that Lambda.
There are some notes on how to implement this solution below:
How to invoke Lambda from CFN: Is it possible to trigger a lambda on creation from CloudFormation template
Sample of a Lambda sending response format (when using Custom Resource in CFN): https://github.com/stelligent/cloudformation-custom-resources
To deploy the Lambda following best practices and to easily upload the attachment and document content from local files, you should use sam deploy instead of a plain CFN create-stack.
You can pass information about the newly created resource from the Lambda back to CFN by adding the resource details to the data JSON in the response the Lambda sends back; CFN can then use it with !GetAtt CustomResrc.Attribute. You can find more detail here.
There are some drawbacks to this solution:
It adds complexity to the original solution, since you have to create extra resources for the Lambda execution (an S3 bucket to deploy the Lambda, a role for the Lambda to execute and to assume the SSM actions, and the SSM content file, or else a 'long' inline content). It is no longer a one-call CFN create-stack; however, you can put everything into the SAM template, because at the end of the day it is just a CFN template.
When deleting the CFN stack, you have to handle RequestType == Delete in the Lambda to clean up your resources, as sketched below.
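A minimal sketch of such a backing Lambda, assuming the Node.js SDK v2; the document name, manifest, and bucket are the ones from the question, and the response-sending step is only indicated (see the stelligent samples linked above):

import { SSM } from 'aws-sdk';

const ssm = new SSM();

// Sketch of the custom resource handler; the event shape follows the
// CloudFormation custom resource request format.
export const handler = async (event: any) => {
  if (event.RequestType === 'Create') {
    await ssm.createDocument({
      Name: 'my-package',                      // from the question's CLI command
      DocumentType: 'Package',
      Content: '...manifest.json contents...', // placeholder
      Attachments: [{ Key: 'SourceUrl', Values: ['s3://my-s3-bucket'] }],
    }).promise();
  } else if (event.RequestType === 'Delete') {
    // Clean up so that stack deletion does not leave the document behind
    await ssm.deleteDocument({ Name: 'my-package' }).promise();
  }
  // Finally, send a SUCCESS/FAILED response to event.ResponseURL so CFN can proceed.
};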
PS: If you are not strictly tied to CFN, you can try Terraform: https://www.terraform.io/docs/providers/aws/r/ssm_document.html
I want to create a secure APIG using serverless. In my current "s-function.json" I already have:
"apiKeyRequired": true,
And in my "s-resources-cf.json" I already have:
"AWSApiKey": {
"Type": "AWS::ApiGateway::ApiKey",
"Properties" : {
"Description" : "ApiKey for secure the connections to the xxx API",
"Enabled" : true
}
}
It correctly creates everything: a Lambda, an APIG for that Lambda (including CORS), and the API key. But I need to manually assign the key to the generated APIG stage; do you have any ideas on how I could do this automatically using serverless?
I've read the AWS documentation about the feature I want (and it seems it is possible): AWS CloudFormation API Key
The documentation shows that it can be done by:
"ApiKey": {
"Type": "AWS::ApiGateway::ApiKey",
"DependsOn": ["TestAPIDeployment", "Test"],
"Properties": {
"Name": "TestApiKey",
"Description": "CloudFormation API Key V1",
"Enabled": "true",
"StageKeys": [{
"RestApiId": { "Ref": "RestApi" },
"StageName": "Test"
}]
}
}
But I don't know how to add a reference to the APIG automatically created by serverless, nor how to wait until that APIG is created.
You can specify a list of API keys to be used by your service's Rest API by adding an apiKeys array property to the provider object in serverless.yml. You'll also need to explicitly specify which endpoints are private and require one of the API keys to be included in the request, by adding a private boolean property to the http event object you want to set as private. API keys are created globally, so if you want to deploy your service to different stages, make sure your API key contains a stage variable, as defined below. When using API keys, you can optionally define usage plan quota and throttle using a usagePlan object.
Here's an example configuration for setting API keys for your service Rest API:
service: my-service
provider:
  name: aws
  apiKeys:
    - myFirstKey
    - ${opt:stage}-myFirstKey
    - ${env:MY_API_KEY} # you can hide it in a serverless variable
  usagePlan:
    quota:
      limit: 5000
      offset: 2
      period: MONTH
    throttle:
      burstLimit: 200
      rateLimit: 100
functions:
  hello:
    events:
      - http:
          path: user/create
          method: get
          private: true
For more info read the following doc:
https://serverless.com/framework/docs/providers/aws/events/apigateway