I want to get started with the Zappa framework, but when I run zappa init I get this prompt:
How do I disable the automatic creation of an S3 bucket?
You can't. Zappa first uploads your .zip file to an S3 bucket and deploys from there.
From the official repo:
Zappa will automatically package up your application, [...] upload the archive to S3, create and manage the necessary Amazon IAM policies and roles, register it as a new Lambda function, create a new API Gateway resource, create WSGI-compatible routes for it, link it to the new Lambda function, and finally delete the archive from your S3 bucket. Handy!
So your options are to dig into Zappa and work around this yourself, or perhaps to try Chalice, which uploads the package directly.
@mislav is correct that Zappa does need an S3 bucket. But one only gets auto-created if you don't specify one. Simply provide a valid bucket name at the prompt, and Zappa will use that bucket instead of creating one for you.
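If you have already run zappa init, you can also set the bucket after the fact via the s3_bucket key in zappa_settings.json. A minimal sketch, where the bucket name and app details are placeholders:

{
    "dev": {
        "app_function": "app.app",
        "aws_region": "us-east-1",
        "s3_bucket": "my-existing-deploy-bucket"
    }
}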
Details - I have a CircleCI job that zips my Lambda code and uploads it to S3. (We just keep updating the same S3 object, e.g. code.zip; we don't change the name.)
Now I have AWS CDK code where I define my Lambda and reference the S3 zip file, using this: https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-lambda.Code.html#static-fromwbrbucketbucket-key-objectversion.
Issue - I want an automated deployment such that whenever a new version of code.zip is uploaded to S3, all Lambdas using it are automatically updated with the latest code.
Please suggest!
I can think of 2 solutions for this:
Add a step after you upload the latest code to S3 that updates your Lambda function, like below:
aws lambda update-function-code \
    --function-name your_function_name \
    --s3-bucket your_bucket_name \
    --s3-key your_code.zip
Create another Lambda function and set up an S3 object-created event (or whatever event suits you); you can even filter on the .zip suffix.
In the Lambda function triggered by the S3 upload, you can use the same call to update your target Lambda functions; a sketch follows below.
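A minimal sketch of that trigger function in Python with boto3, assuming you keep a hard-coded list of the functions that consume code.zip (the function names are placeholders):

import boto3

lambda_client = boto3.client("lambda")

# Placeholder list of the functions that consume code.zip.
FUNCTIONS_USING_CODE_ZIP = ["my-function-a", "my-function-b"]

def handler(event, context):
    # Each record describes one uploaded object (e.g. code.zip).
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        for function_name in FUNCTIONS_USING_CODE_ZIP:
            # Point the function at the freshly uploaded zip.
            lambda_client.update_function_code(
                FunctionName=function_name,
                S3Bucket=bucket,
                S3Key=key,
            )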
I am using Serverless to deploy my infra to AWS. Each time I change the stack name, a new S3 bucket is created to hold the deployment archives. This has ended up creating 90 buckets in my account. Is there a way to make Serverless use one S3 bucket and create one folder for each stage or stack?
You can reuse the same S3 bucket: https://medium.com/serverlessguru/how-to-reuse-an-aws-s3-bucket-for-multiple-serverless-framework-deployments-d1673d3d8259
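For example, the deploymentBucket setting in serverless.yml points every deployment at one bucket, and artifacts are then namespaced by service and stage under the deployment prefix. The bucket name below is a placeholder:

# serverless.yml
service: my-service

provider:
  name: aws
  runtime: python3.9
  deploymentBucket:
    name: my-shared-deployment-bucket
  # Artifacts land under <deploymentPrefix>/<service>/<stage>/...
  deploymentPrefix: serverless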
I am trying to create a CloudFormation template (CFT) for an S3 bucket that needs to be "PublicRead" and that also has "Requester Pays" turned on.
I have looked at the documentation for S3 Bucket CFTs: AWS::S3::Bucket - AWS CloudFormation
I have also looked at the documentation for "Requester Pays", but it fails to mention anything about CFTs; it only covers enabling the feature through the console and with the REST API:
Requester Pays Buckets - Amazon Simple Storage Service
Right now we are trying to move all our infrastructure into infrastructure as code, and this is a somewhat large blocker. I have heard that others have had trouble with CFTs not supporting some features of AWS services, but usually those are unpopular or newer services. I would have thought CFTs supported all the options S3 offers for buckets.
You are correct. The CloudFormation AWS::S3::Bucket resource does not support Requester Pays.
To enable it, you would need to make an API call such as put_bucket_request_payment():
Sets the request payment configuration for a bucket. By default, the bucket owner pays for downloads from the bucket. This configuration parameter enables the bucket owner (only) to specify that the person requesting the download will be charged for the download.
response = client.put_bucket_request_payment(
    Bucket='string',
    RequestPaymentConfiguration={
        'Payer': 'Requester'|'BucketOwner'
    }
)
This could be done by adding an AWS Lambda custom resource to the CloudFormation template, or by using the AWS CLI from an Amazon EC2 instance that is created as part of the stack.
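A rough sketch of such a Lambda-backed custom resource handler in Python; the BucketName property name is my own choice, not part of any AWS API:

import json
import urllib.request
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Enable Requester Pays on Create/Update; nothing to undo on Delete.
    status = "SUCCESS"
    try:
        if event["RequestType"] in ("Create", "Update"):
            s3.put_bucket_request_payment(
                Bucket=event["ResourceProperties"]["BucketName"],
                RequestPaymentConfiguration={"Payer": "Requester"},
            )
    except Exception:
        status = "FAILED"

    # CloudFormation blocks until it receives this callback.
    body = json.dumps({
        "Status": status,
        "Reason": "See CloudWatch Logs for details",
        "PhysicalResourceId": event.get("PhysicalResourceId", "requester-pays-config"),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
    }).encode()
    request = urllib.request.Request(event["ResponseURL"], data=body, method="PUT")
    urllib.request.urlopen(request)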
AWS S3 allows setting the metadata of an S3 object in the console, as described on this page: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/add-object-metadata.html.
How can I configure that in serverless.yml?
I figured it out. It can be done with the serverless-s3-sync plugin. https://yingzuo.io/set-metadata-of-s3-objects-on-serverless-yml
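For reference, this is roughly what the plugin configuration looks like; the bucket name, local directory, and header values below are placeholders:

# serverless.yml
plugins:
  - serverless-s3-sync

custom:
  s3Sync:
    - bucketName: my-static-site-bucket
      localDir: static
      # Per-glob upload parameters become object metadata/headers.
      params:
        - "*.html":
            CacheControl: "no-cache"
        - "assets/*":
            CacheControl: "public, max-age=31536000"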
I am trying to deploy a Lambda function to AWS from S3.
My organization currently does not allow me to upload files to the root of an S3 bucket, only to a folder (i.e. s3://application-code-bucket/Application1/).
Is there any way to deploy the Lambda function code through S3 from a directory other than the bucket root? I checked the documentation for Lambda's CreateFunction API and could not find anything obvious.
You need to zip your Lambda package and upload it to S3; it can sit in any folder.
You can then point the Lambda function at that object, either by giving the console the https S3 URL of the file, or by passing the bucket name and full key (folder included) through the API, as sketched below.
The S3 bucket needs to be in the same region as the Lambda function.
Make sure you zip from inside the folder, i.e. when the package is unzipped, the files should be extracted into the same directory as the unzip command and should not create a new directory for the contents.
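A minimal sketch with boto3, assuming the folder layout from the question; the function name, role ARN, and handler are placeholders:

import boto3

client = boto3.client("lambda")  # same region as the bucket

client.create_function(
    FunctionName="my-function",
    Runtime="python3.9",
    Role="arn:aws:iam::123456789012:role/my-lambda-role",
    Handler="app.handler",
    # The S3Key may include any folder prefix inside the bucket.
    Code={
        "S3Bucket": "application-code-bucket",
        "S3Key": "Application1/code.zip",
    },
)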
I have this old script of mine that I used to automate Lambda deployments.
It needs a bit of refactoring, but it is still usable.
It takes as input the Lambda name and the path of a zip file located locally on your PC.
It uploads the zip to S3 and publishes it to AWS Lambda.
You need to set AWS credentials with an IAM role that allows:
S3 upload permission
AWS Lambda update permission
You need to modify the bucket name and the path you want your zip to be uploaded to (lines 36-37).
That's it.
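The script itself is not reproduced here, but the flow it describes can be sketched with boto3 roughly like this; the bucket name and key prefix are placeholders:

import sys
import boto3

# Placeholders: set these to your bucket and upload path.
BUCKET = "my-deploy-bucket"
PREFIX = "lambda-packages/"

def deploy(function_name, zip_path):
    key = PREFIX + function_name + ".zip"
    # Upload the local zip to S3...
    boto3.client("s3").upload_file(zip_path, BUCKET, key)
    # ...then point the function at it and publish a new version.
    boto3.client("lambda").update_function_code(
        FunctionName=function_name,
        S3Bucket=BUCKET,
        S3Key=key,
        Publish=True,
    )

if __name__ == "__main__":
    deploy(sys.argv[1], sys.argv[2])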